How to use counters similar to Verilog using for loops?


#1

Hello,

I have to instantiate a chain of counters. Currently I do it as below:

NB_CHAIN_COUNTERS = 2

def dq_proc(clk, q, d):
    @always(clk)
    def dq_process():
        q.next = d
    return dq_process

data_in_sig_inst = [None for i in range(NB_CHAIN_COUNTERS)]
data_in = [Signal(int(0)) for i in range(NB_CHAIN_COUNTERS)]
for i in range(NB_CHAIN_COUNTERS):
    data_in_sig_inst[i] = dq_proc(clk, data_in[i], (data_in[i] + 1))

This does not work.
Surprisingly, the code above works if I assign (data_in[i]+1) to a signal separately and pass that signal to the dq_proc method.
Can someone propose an elegant solution?

Thanks and regards
krs


#2

Here is a better version of your code:

NB_CHAIN_COUNTERS = 2

def dq_proc(clk,q,d):
    @always_seq(clk.posedge, reset=None)
    def dq_process():
        q.next = d
    return dq_process

data_in_sig_inst = []
data_in = [Signal(int(0)) for i in range(NB_CHAIN_COUNTERS)]

for i in range(NB_CHAIN_COUNTERS-1):
    inst = dq_proc(clk, data_in[i], (data_in[i] + 1))
    data_in_sig_inst.append(inst)

return data_in_sig_inst

The above code will not work as is, since it is missing "context" code.
data_in is unconstrained (it has no size). This can work in simulation, but not in conversion.
What do you want to achieve?

Please note that the q and d parameters of the dq_proc() function can be either booleans or bit_vectors. This is one aspect of MyHDL's power.

Note: please put your code in code-formatting tags to preserve indentation.


#3

Hello DrPi,

Thanks for the reply,
The code does not give any error, but the issue here is that data_in is always assigned 1, so the counter does not increment.
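The mechanism behind this can be sketched without any HDL machinery. A MyHDL Signal overloads arithmetic to read its *current* value and return a plain integer, so (data_in[i]+1) is computed once, at elaboration time, and never re-evaluated. A toy stand-in class (not the real myhdl.Signal, just an illustration of the evaluation order):

```python
class FakeSignal:
    """Toy stand-in for a MyHDL-style signal (illustration only)."""
    def __init__(self, val):
        self.val = val       # current value
        self.next = val      # value to take at the next update

    def __add__(self, other):
        # Arithmetic reads the *current* value and returns a plain int,
        # not a live expression that re-evaluates later.
        return self.val + other

    def update(self):
        self.val = self.next

sig = FakeSignal(0)
d = sig + 1          # evaluated once, "at elaboration": d is the int 1
for _ in range(3):   # three "clock edges"
    sig.next = d     # d never changes, so the register sticks at 1
    sig.update()
print(sig.val)       # 1, not 3
```

This is exactly why passing the bare signal and doing `d + 1` inside the process works: the addition is then re-evaluated on every clock edge.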

if i instead use this code:

def dq_proc(clk,q,d):
    @always_seq(clk.posedge, reset=None)
    def dq_process():
        q.next = d + 1
    return dq_process

and pass

dq_proc(clk, data_in[i], (data_in[i]) )

it works. But is there any specific reason that I cannot pass the expression (data_in[i]+1) to the method?

Please note that I also want many counters in parallel, hence the parameter for the counter chain:

NB_CHAIN_COUNTERS = 10

This code is only for simulation (I have tried my best to narrow the issue down to this code snippet).
Maybe I am missing something as well.

Thanks
krs


#4

Why don't you use a simple bit_vector and increment it?
Something like this :

    counter = Signal(intbv(0)[8:])

    @always_seq(clk.posedge, reset=None)
    def count_process():
        counter.next = counter + 1

#5

Hello DrPi,

Thank you for the reply.

But is it possible to have parametrizable parallel counters with this method?
I want to instantiate n counters with different initial values.


#6

You need to put the increment value in the generator and adjust the item selects (i) to get the correct values from the list of signals. I am not sure whether your (data_in[i]+1) was incorrectly selecting the wrong list item or trying to increment the signal.

def upper_level_block():
    NB_CHAIN_COUNTERS = 2
    
    def dq_proc(clk, q, d, incval=1):
        @always_seq(clk.posedge, reset=None)
        def dq_process():
            q.next = d + incval
        return dq_process
    
    data_in_sig_inst = []
    data_in = [Signal(int(0)) for i in range(NB_CHAIN_COUNTERS)]
    
    for i in range(NB_CHAIN_COUNTERS-1):
        inst = dq_proc(clk, data_in[i], data_in[i+1])
        data_in_sig_inst.append(inst)
    
    return data_in_sig_inst

#7

Hello cfelton,

It answers the question somewhat, I think.
My intention is to have just one dq_proc, which is just a register (single-bit or multi-bit), and to use the same register method for different operations like counting, addition, subtraction, etc., by passing the right-hand-side operand from the instantiation code. This is ideally to reduce the code footprint and for maximum configurability.
I’m writing an example below:

def dq_proc(clk, reset, q, d, init_val):
    @always(clk.posedge, reset.posedge)
    def dq_process():
        if reset == 1:
            q.next = init_val
        else:
            q.next = d
    return dq_process

def upper_level_block():
    NB_CHAIN_COUNTERS = 10
    NB_CHAIN_ADDERS=10

    inst = []
    data_in = [Signal(int(0)) for i in range(NB_CHAIN_COUNTERS)]
    a = [Signal(int(random.randint(0x0100, 0xffff))) for i in range(NB_CHAIN_ADDERS)]
    b = [Signal(int(random.randint(0x0100, 0xffff))) for i in range(NB_CHAIN_ADDERS)]
    c = [Signal(int(0)) for i in range(NB_CHAIN_ADDERS)]


    
    # counter instantiation
    for i in range(NB_CHAIN_COUNTERS):
        # issue: I cannot pass data_in[i] + 1 for the counter, as it is
        # supposed to be incremented on the next clock cycle
        inst.append(dq_proc(clk, reset, data_in[i], data_in[i] + 1, int(i * 10)))

    # adder instantiation
    for i in range(NB_CHAIN_ADDERS):
        # issue: I cannot pass a[i] + b[i]
        inst.append(dq_proc(clk, reset, c[i], a[i] + b[i], int(0)))

    return inst
  

This is just example code for demonstration, but basically my idea was to use only one dq_process and to instantiate different operations through its arguments, based on user requirements.

It looks like such dynamic configurability is currently not possible.
The way to go is to have different methods for counters, adders, subtractors, etc., and to instantiate each operation separately.

Is this what you mean?
Or, is it possible to achieve this in some way ?

Thank you


#8

This is feasible, but it is not as straightforward as you would like:

    # adder instantiation
    add = [Signal(int(0)) for i in range(NB_CHAIN_ADDERS)]

    @always_comb
    def add_proc():
        for i in range(NB_CHAIN_ADDERS):
            add[i].next = a[i] + b[i]
    inst.append(add_proc)

    for i in range(NB_CHAIN_ADDERS):
        inst.append(dq_proc(clk, reset, c[i], add[i], int(0)))

Note: you have to specify the signal widths, or your design will not convert.


#9

Hello DrPi,

Thanks for the reply.
A question: does this combinational adder, which is then registered, have any inherent timing disadvantage compared to an adder that is synchronized to clock edges?

Code snippets:

    add = [Signal(int(0)) for i in range(NB_CHAIN_ADDERS)]

    @always_comb
    def add_proc():
        for i in range(NB_CHAIN_ADDERS):
            add[i].next = a[i] + b[i]
    inst.append(add_proc)

    for i in range(NB_CHAIN_ADDERS):
        inst.append(dq_proc(clk, reset, c[i], add[i], int(0)))

and

c = [Signal(int(0)) for i in range(NB_CHAIN_ADDERS)]
a = [Signal(int(10)) for i in range(NB_CHAIN_ADDERS)]
b = [Signal(int(20)) for i in range(NB_CHAIN_ADDERS)]

def add_proc(clk, reset, q, a, b):
    @always(clk.posedge)
    def add_process():
        q.next = a + b
    return add_process

for i in range(NB_CHAIN_ADDERS):
    inst.append(add_proc(clk, reset, c[i], a[i], b[i]))

Do both behave the same way with respect to timing?
I guess the first method would need more timing checks than the second one (for example, if a and b come from external inputs).


#10

From a logic perspective, both are identical.
In the q.next = a + b statement, the intermediate add signal is implicit, but it exists.
In both cases, there is an adder followed by a register.


#11

@krs there might be some limitations; you can't code it like your example. You need to separate the elaboration code from the convertible/synthesizable code.

def generic_adder(clock, reset, x, y, c):
    @always_seq(clock.posedge, reset=reset)
    def beh():
        y.next = x + c
    return beh

In this example, you can pass a positive or negative value, etc., to c; you can also pass a signal. You can't (the way your example is defined) pass a more complicated expression. Also, the init_val parameter is redundant: you can use the initial value when you create the signal.