
Different result in same config #38

Closed
zyn810039594 opened this issue Jul 9, 2023 · 16 comments

@zyn810039594

Hello!
I've found a strange situation when I simulate two identical cores. They just... behave differently. They have the same flash, the same RAM, the same bus, and the same peripherals, but they still do this:
[screenshot: the two cores producing different results]
So why can that happen? I want to use your core to build a dual-core lockstep (I'll open a pull request once it is fully tested), but this strange "characteristic" makes it impossible...
Hoping for your help!

@zyn810039594
Author

My plan is:
Lock the cache RAM and bus ports together. The master core is a full core, while the slave core has no debug module and no cache RAM. A comparison module feeds the slave core with data cached from the master core, and compares the slave's outputs against that cached data.
But all of this needs a perfectly synchronous core...
Or is that impossible?

@Dolu1990
Member

Hi ^^

It may be due to the randomized state of the branch predictor after boot? (in part at least)
Which tool are you using to run the simulation ? Verilator ?

Regards
Charles

@zyn810039594
Author

I use Icarus Verilog in this project (because of my simulation model), and I left all the registers uninitialized and the memories random. I tried initializing all the registers to zero, but the result is the same. So maybe it's because the memories are random at first?

@zyn810039594
Author

I tried initializing the memories to zero, and it works. Is the problem in the GShare memory? I'll run a test, but I don't want to have to change every memory. (Also, when I finish, should I open a pull request directly or create a separate project?)

@zyn810039594
Author

Now it seems that the randomness comes from more or other places; initializing only the branch-prediction memory is not enough...

@Dolu1990
Member

The problem is at the GShare memory

Yes, but not only; overall there is:

  • branch prediction gshare + btb
  • lsu hit predictors + hazard prediction

@zyn810039594
Author

The problem is at the GShare memory

Yes, but not only; overall there is:

  • branch prediction gshare + btb
  • lsu hit predictors + hazard prediction

The lsu and hazard memories can be replaced by registers to get a reset value, but the gshare and btb memories are too big to be replaced... Is there any way to solve that?

@Dolu1990
Member

For simulation, you could add a initBigInt on them.

If, for the SpinalConfig, you do a .includeSimulation, it will include some of them:
https://github.com/SpinalHDL/NaxRiscv/blob/main/src/main/scala/naxriscv/lsu2/Lsu2Plugin.scala#L466

So you can do :

object Gen extends App {
  ...
  val spinalConfig = SpinalConfig(inlineRom = true)
  spinalConfig.includeSimulation
  ...
}
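For illustration, a minimal sketch of what adding an initBigInt to a predictor-style memory could look like (the component name and sizes here are hypothetical, not NaxRiscv's actual plugin code):

```scala
import spinal.core._

// Hypothetical example: give a predictor-style Mem a fully defined
// initial content so simulation starts deterministic. The component
// name and sizes are illustrative only.
class PredictorMem extends Component {
  val io = new Bundle {
    val addr  = in UInt(12 bits)
    val count = out Bits(2 bits)
  }
  val counters = Mem(Bits(2 bits), wordCount = 4096)
  // initBigInt sets the generated initial content of the memory,
  // which is what iverilog/Verilator load at time zero.
  counters.initBigInt(Seq.fill(4096)(BigInt(0)))
  io.count := counters.readAsync(io.addr)
}
```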

@zyn810039594
Author

For simulation, you could add a initBigInt on them.

If, for the SpinalConfig, you do a .includeSimulation, it will include some of them: https://github.com/SpinalHDL/NaxRiscv/blob/main/src/main/scala/naxriscv/lsu2/Lsu2Plugin.scala#L466

So you can do :

object Gen extends App {
  ...
  val spinalConfig = SpinalConfig(inlineRom = true)
  spinalConfig.includeSimulation
  ...
}

I just want to keep it synthesizable... but some designs are not suitable for synthesis as registers (I mean gshare and btb; they are too big to turn into registers).

@Dolu1990
Member

What FPGA / ASIC are you targetting ?

@zyn810039594
Author

What FPGA / ASIC are you targetting ?

FPGA verification first, and then ASIC. So it shouldn't rely on any initialized RAM.

@Dolu1990
Member

Hmm, I guess one solution to stay safe would be to add a few more initialization state machines that write every memory of the design to zero (using a counter to go over every address).

It is already done for the i$ / d$ tags. It could be added for the other memories.
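A minimal sketch of such an init state machine in SpinalHDL (the component, sizes, and port names are hypothetical; the real tag-init logic in NaxRiscv differs):

```scala
import spinal.core._
import spinal.lib._

// Hypothetical sketch: walk every address of a memory after reset and
// write zero, so the content is defined without relying on any
// initialized-RAM support from the FPGA/ASIC flow.
class MemZeroInit extends Component {
  val io = new Bundle {
    val ready = out Bool()   // high once the memory is cleared
  }
  val mem = Mem(Bits(32 bits), wordCount = 1024)

  val counter = Counter(1024)
  val done    = RegInit(False)
  when(!done) {
    mem.write(address = counter.value, data = B(0, 32 bits))
    counter.increment()
    when(counter.willOverflowIfInc) { done := True }
  }
  io.ready := done
}
```

Normal read/write traffic would simply be gated on `io.ready`, so the core only starts once every word has been zeroed.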


@zyn810039594
Author

Hmm, I guess one solution to stay safe would be to add a few more initialization state machines that write every memory of the design to zero (using a counter to go over every address).

It is already done for the i$ / d$ tags. It could be added for the other memories.

So is there any way to give an init value to memories that are generated as registers?

@Dolu1990
Member

Dolu1990 commented Aug 4, 2023

No there is not, unless you blackbox everything and fix it in the blackboxes themselves.
Otherwise, I would say the most portable option would still be to have an FSM init all the memories of any kind.
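For reference, a rough sketch of the blackbox route (the entity name, generics, and ports are all hypothetical; the matching Verilog model, with its own initial block fixing the power-up content, would live outside SpinalHDL):

```scala
import spinal.core._

// Hypothetical sketch: replace a Mem with a BlackBox so the memory's
// initialization can be fixed inside the blackbox model itself
// (e.g. an initial block in its Verilog implementation).
class RamBlackBox(wordCount: Int, width: Int) extends BlackBox {
  addGeneric("WORDS", wordCount)
  addGeneric("WIDTH", width)
  val io = new Bundle {
    val clk   = in Bool()
    val wr    = in Bool()
    val addr  = in UInt(log2Up(wordCount) bits)
    val wdata = in Bits(width bits)
    val rdata = out Bits(width bits)
  }
  noIoPrefix()
  mapClockDomain(clock = io.clk)
}
```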

@zyn810039594
Author

Everything is solved; closing it.
