generic adder "inference architecture": simulation error
So, I have to create a generic N-bit adder with carry in and carry out.
So far I have made two perfectly working architectures, one using the generate feature and one using an RTL description, as shown below:
Entity:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity adder_n is
  generic (N: integer := 8);
  port (
    a, b: in  std_logic_vector(0 to N-1);
    cin:  in  std_logic;
    s:    out std_logic_vector(0 to N-1);
    cout: out std_logic);
end adder_n;
Architectures 1 and 2:
--STRUCT
architecture struct of adder_n is
  component f_adder
    port (
      a, b, cin: in  std_logic;
      s, cout:   out std_logic);
  end component;
  signal c: std_logic_vector(0 to N);
begin
  c(0) <= cin;
  cout <= c(N);
  adders: for k in 0 to N-1 generate
    A1: f_adder port map(a(k), b(k), c(k), s(k), c(k+1));
  end generate adders;
end struct;
--END STRUCT

architecture rtl of adder_n is
  signal c: std_logic_vector(1 to N);
begin
  s    <= (a xor b) xor (cin & c(1 to N-1));
  c    <= ((a or b) and (cin & c(1 to N-1))) or (a and b);
  cout <= c(N);
end rtl;
Now, my problem is with the third architecture, in which I am trying to infer the adder. Although the following architecture compiles fine, when I try to simulate it I get a simulation error (in ModelSim), which I have attached at the end of this post.
I am guessing something is wrong with the numeric_std definitions. I am trying to avoid the arith libraries, and I am still getting used to the IEEE standard.
Any ideas are welcome! Thanks!
Inference arch:
--INFERENCE
architecture inference of adder_n is
  signal tmp: std_logic_vector(0 to N);
  signal atmp, btmp, ctmp, add_all: integer := 0;
  signal cin_usgn: std_logic_vector(0 downto 0);
  signal U: unsigned(0 to N);
begin
  atmp <= to_integer(unsigned(a));
  btmp <= to_integer(unsigned(b));
  cin_usgn(0) <= cin;
  ctmp <= to_integer(unsigned(cin_usgn));
  add_all <= (atmp + btmp + ctmp);
  U    <= to_unsigned(add_all, N);
  tmp  <= std_logic_vector(U);
  s    <= tmp(0 to N-1);
  cout <= tmp(N);
end inference;
-- END
Simulation error:
# Cannot continue because of fatal error.
# HDL call sequence:
# Stopped at C:/altera/14.1/modelsim_ase/test1_simon/adder_inference.vhd 58 Architecture inference
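
For reference, a minimal testbench sketch along the following lines (the testbench name and stimulus values are assumptions, not taken from the post) is enough to reproduce the error in ModelSim; the mismatched assignment is a concurrent statement, so it already executes at time zero:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity adder_n_tb is
end adder_n_tb;

architecture sim of adder_n_tb is
  constant N: integer := 8;
  signal a, b, s: std_logic_vector(0 to N-1) := (others => '0');
  signal cin, cout: std_logic := '0';
begin
  -- direct instantiation, explicitly binding the inference architecture
  dut: entity work.adder_n(inference)
    generic map (N => N)
    port map (a => a, b => b, cin => cin, s => s, cout => cout);

  stim: process
  begin
    a   <= std_logic_vector(to_unsigned(100, N));
    b   <= std_logic_vector(to_unsigned(27, N));
    cin <= '1';
    wait for 10 ns;
    wait;  -- end of stimulus
  end process stim;
end sim;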
U has a length of N+1 (0 to N).

Changing

U <= to_unsigned(add_all,N);

to

U <= to_unsigned(add_all,N+1);

will prevent the length mismatch between the left-hand and right-hand sides of the signal assignment in architecture inference of adder_n.

The argument passed to to_unsigned specifies the length.
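
For reference, here is a sketch of the inference architecture with only that one change applied (everything else left exactly as in the post):

--INFERENCE (length fix applied)
architecture inference of adder_n is
  signal tmp: std_logic_vector(0 to N);
  signal atmp, btmp, ctmp, add_all: integer := 0;
  signal cin_usgn: std_logic_vector(0 downto 0);
  signal U: unsigned(0 to N);          -- N+1 elements wide
begin
  atmp <= to_integer(unsigned(a));
  btmp <= to_integer(unsigned(b));
  cin_usgn(0) <= cin;
  ctmp <= to_integer(unsigned(cin_usgn));
  add_all <= (atmp + btmp + ctmp);
  U    <= to_unsigned(add_all, N+1);   -- size argument now matches U'length
  tmp  <= std_logic_vector(U);
  s    <= tmp(0 to N-1);
  cout <= tmp(N);
end inference;
-- END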