CSE 502: Warm-Up Project

Last Updated 3/3/2013

Infrastructure

Always use your 'Unix' login and password for this course.

Rocks cluster - sbrocks.cewit.stonybrook.edu:22
VM - allv21.all.cs.sunysb.edu:130 - allv22.all.cs.sunysb.edu:130 - allv23.all.cs.sunysb.edu:130 (never use this for homeworks)

Once you connect to the Rocks head node, run 'condor_status'. This displays all of the compute nodes that are present, along with their load and memory. Avoid selecting machines that have less than 1000 MB of memory.

Example output

sbrocks$ condor_status

Name               OpSys      Arch   State     Activity LoadAv Mem   ActvtyTime

slot2@compute-3-11 LINUX      X86_64 Unclaimed Idle     0.000  1005  9+08:38:55
slot1@compute-3-12 LINUX      X86_64 Unclaimed Idle     0.000  1005  0+00:50:04
slot2@compute-3-12 LINUX      X86_64 Unclaimed Idle     0.000  1005  6+16:52:47
slot1@compute-3-13 LINUX      X86_64 Unclaimed Idle     0.000   501  0+00:15:04 Avoid

Next you will have to set up MARSS and scons

If you are working on the VMs, scons is already installed. On Rocks, download scons from here. Since you do not have sudo privileges, change the install directory as shown below, and be sure to build with Python 2.7; if you are using the compute machines, you must use the version of Python in ~crfitzsimons/cse502/bin:
compute$ python setup.py install --prefix=$HOME
Next grab MARSS from here and build with scons.
compute$ scons -Q

If there are any dependency issues when building - please let us know right away and use a different machine.

Now you can use the binaries in MARSS/qemu, specifically you can execute the following:
compute$ qemu/qemu-system-x86_64 -m 128M -hda ~mferdman/ubuntu_live.qcow2 -snapshot -nographic
or
local$ ssh -L 5900:localhost:59xx <compute-node>; where xx is a random number [01 - 99]
compute$ qemu/qemu-system-x86_64 -m 128M -hda ~mferdman/ubuntu_live.qcow2 -snapshot -vnc :xx; where xx is the same number as in the first step
local$ xvncviewer localhost
The credentials are ubuntu:ubuntu and this account has sudo access. Swap back to the QEMU monitor screen (CTRL+ALT+2) and execute the following without any options to list all of the available options on 'stdout':
(qemu) simconfig
Alternatively, it is possible to set up a configuration file manually and to include it when you start QEMU (comment lines start with '#'):
compute$ qemu/qemu-system-x86_64 -m [memory_size] -hda [path-to-qemu-disk-image] -simconfig [simulator-config-file]
# Sample Marss simconfig file
 -machine private_L2
 
 # Logging options
 -logfile test.log
 -loglevel 4
 # Start logging after 10million cycles
 -startlog 10m
 
 # Stats file
 -stats test.stats

Finally, you must apply a patch

The patch contains the framework for the warm-up project and resides in ~mferdman/warmup-project.patch on the VMs. You should apply this patch and build MARSS with it; to do so, you will need to update the corresponding SConstruct and SConscript files with the paths to the SystemC include files and library files, which are located in Connor's home directory (~crfitzsimons/cse502/...).

To apply the patch, first run "patch --dry-run < path.to.patch.file"; if everything is OK, remove the "--dry-run" to actually apply the changes.

After you apply the patch, the scons files will point to /scratch/mferdman/sysc-install/, which does not exist on the course machines. Connor installed SystemC in his directory, so you should modify the paths in your files to point to his directory (~crfitzsimons/cse502/...) instead of /scratch/mferdman/.

A basic SystemC style hello world program:

// All SystemC modules should include the systemc.h header file
#include <systemc.h>
// hello_world is the module name
SC_MODULE (hello_world) {
	SC_CTOR (hello_world) {
		// Nothing in constructor 
	}
	void say_hello() {
		//Print "Hello World" to the console.
		cout << "Hello World.\n";
	}
};

// sc_main is the top-level function, like main in plain C++
int sc_main(int argc, char* argv[]) {
	hello_world hello("HELLO");
	// Print the hello world
	hello.say_hello();
	return 0;
}

For a more detailed MARSS and scons setup procedure take a look at the Getting Started Guide or this Tutorial.

More information about the SystemC library, look at the Reference Manual or at one of the tutorials, such as Introduction To SystemC.

Signal Traces

SystemC provides an in-depth mechanism for creating traces of execution. An incomplete example based on the Counter Design Block is shown below:

sc_signal<bool>   clock;
sc_signal<bool>   reset;
sc_signal<bool>   enable;
sc_signal<sc_uint<4> > counter_out;
// Open VCD file
sc_trace_file *wf = sc_create_vcd_trace_file("counter");
// Dump the desired signals
sc_trace(wf, clock, "clock");
sc_trace(wf, reset, "reset");
sc_trace(wf, enable, "enable");
sc_trace(wf, counter_out, "count");
// ... run the simulation (sc_start) here so values are dumped ...
sc_close_vcd_trace_file(wf);

GTKWave

The recommended tool for visualizing the waveform data from the generated signal traces is GTKWave. GTKWave is already installed on the VMs, but not on the Rocks cluster. Installation instructions for Rocks are below. Please note that this tool will not work in a text-only environment, so you need to use -Y on the ssh command line to set up X11 forwarding in order to use it remotely.

The installation procedure is straightforward, but also requires gperf:

$ wget http://ftp.gnu.org/pub/gnu/gperf/gperf-3.0.4.tar.gz
$ wget http://gtkwave.sourceforge.net/gtkwave-3.3.43.tar.gz
$ tar xvf gperf-3.0.4.tar.gz
$ tar xvf gtkwave-3.3.43.tar.gz
$ cd gperf-3.0.4
$ ./configure --prefix=$HOME/cse502/
$ make; make install
$ cd ../gtkwave-3.3.43
$ ./configure --prefix=$HOME/cse502/ ac_cv_path_GPERF=$HOME/cse502/bin --disable-xz
$ make; make install

For a more detailed GTKWave setup take a look at the User Guide. For an in-depth tutorial on SystemC including basic GTKWave information, scan through The SystemC Development Guide.

Now you are ready to start building the Warm-up Project!

Your task for the project will be to fill in the body of the CacheProject class in SystemC/cache.h to implement the specific cache organization that you want to target. As a reminder, the maximum number of points you can receive is determined by the organization that you implement:

Warm-up Project 			Points
1 Port, Direct-Mapped, 16K 				10
2 Port, Direct-Mapped, 16K 				12
1 Port, Set-Associative, 16K 				14
2 Port, Direct-Mapped, 64K, Pipelined 			16
2 Port, Set-Associative, 64K, Pipelined 		18
2 Port, Set-Associative, 64K, Pipelined, Way-Predicted 	20

A basic overview of the Cache interface that you must implement:

in_ena[#]<bool> - New request is available.  Can only be true if the port_available output was set to true on the preceding clock cycle.
in_addr[#]<uint<64>> - The full address that triggered this operation (corresponding to the accessed/inserted block).
in_is_insert[#]<bool> - If true, must evict an entry to write a new one.
in_has_data[#]<bool> - If true, must write value from in_data into the cache.
in_needs_data[#]<bool> - If true, data must be fetched from the cache and output on out_data.
in_data[#]<uint<64>> - The data in case in_has_data is true.
in_update_state[#]<bool> - If update state is true, must update the state corresponding to the block with the value from in_new_state.
in_new_state[#]<uint<8>> - The state in case in_update_state is true.

out_ready[#]<bool> - If true, one of the operations sent to the cache has completed.
out_token[#]<uint<64>> - When out_ready is true, out_token must be set to the value of in_addr from the original request.
out_addr[#]<uint<64>> - This should always be set to what has been read out of the tag array.  This may be the accessed tag or the old tag (the victim being replaced).
out_state[#]<uint<8>> - Always reads the state bits corresponding to the accessed cache line, EXCEPT on a miss, when it must be set to 0xff.
out_data[#]<bv<8*blocksz>> - If in_needs_data on the request was true, out_data must contain the block contents.

Submission Procedure

UPDATED

1 - I will be pulling code from the VM machines in $HOME/submissions/warmup/. In general, only cache.h should be modified; if any other changes were made, they must be described in the README and included in the warmup directory.

2 - A README is required that contains at least a license, the project selected, and a description of the implementation in the following format:

POINTS: 10 //corresponds to the project selected
GROUP: name1, name2, ... //who else is in the group
LICENSE: All rights reserved. //the license that is being used
IMPLEMENTATION: Description of implementation. //a detailed project description

Details - Only one project per group should be submitted, and be sure to confirm that the directory you have set up on your VM is correct. Only the modified set of files should be present in your warmup directory, not all of qemu or marss.