The current thing I’m working on (a processor for IPTV M3U files) isn’t public yet; it’s still in the very early stages. Some of the “learning to fly” Rust projects I’ve done so far are here, though:
https://git.nerfed.net/r00ty/bingo_rust (a multi-threaded bingo game simulator that I made because of the Stand-up Maths video on the subject).
https://git.nerfed.net/r00ty/spectrum_screen (a port of part of a general CPU emulation project I did in C#; it emulates the ZX Spectrum screen, so you can load in the 6912-byte screen dumps and it will show them in a 2x scaled window).
I think both of these rather lean on Arc<RwLock<Thing>>, because they both operate in a threaded environment. Bingo is wholly multi-threaded, and the Spectrum screen is meant to be used by a CPU emulator running in another thread. So not quite the same thing, but you can probably see a lot of jamming the wrong shape into the wrong hole in both of those.
The current project isn’t multi-threaded, so it has a lot of the Rc/Rc<RefCell> action instead.
EDIT: Just to give the reason for Rc<RefCell> in the current project: I’m reading in an M3U file and I’m going to be cross-referencing it against an Excel file. So in the structure for the M3U file I have two BTreeMaps, one keyed by channel number and one by name, each containing references to the same Channel object.
Likewise, the same Channel objects are stored in the structure for the Excel file that is read in (its entries are searched for in the M3U file structure).
BTreeMaps are used because, depending on the scenario, the contents will be output in either name order or channel order, so it’s just better to insert them in that order in the first place.
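Roughly, the shape I mean is something like this (just a sketch; M3uPlaylist and the field names are made up for illustration):

use std::cell::RefCell;
use std::collections::BTreeMap;
use std::rc::Rc;

struct Channel {
    number: u32,
    name: String,
}

// both maps hold clones of the same Rc, so each Channel exists once but can
// be reached either by number or by name
struct M3uPlaylist {
    by_number: BTreeMap<u32, Rc<RefCell<Channel>>>,
    by_name: BTreeMap<String, Rc<RefCell<Channel>>>,
}

impl M3uPlaylist {
    fn insert(&mut self, channel: Channel) {
        let number = channel.number;
        let name = channel.name.clone();
        let shared = Rc::new(RefCell::new(channel));
        self.by_number.insert(number, Rc::clone(&shared));
        self.by_name.insert(name, shared);
    }
}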
The bingo one actually uses crossbeam channels instead of mutexes, so that’s nice. I haven’t looked too closely at it though.
I don’t think you can do too much about the Spectrum one if you want to keep the two threads, but here’s what I would change related to thread synchronization. Lemmy doesn’t seem to allow me to attach patch files for whatever reason so have an archive instead… https://dblsaiko.net/pub/tmp/patches.tar.bz2 (I wrote a few notes in the commit messages)
Just to give the reason for Rc<RefCell> in the current project: I’m reading in an M3U file and I’m going to be cross-referencing it against an Excel file. So in the structure for the M3U file I have two BTreeMaps, one keyed by channel number and one by name, each containing references to the same Channel object.
So basically it’s channels indexed by channel number and name? That one is actually one of the easy cases. Store indices instead:
struct Channels {
    data: Vec<Channel>,
    by_number: BTreeMap<u32 /* or whatever */, usize>,
    by_name: BTreeMap<String, usize>,
}

// untested but I think it should compile
fn get_channel_by_name(ch: &Channels, name: &str) -> Option<&Channel> {
    Some(&ch.data[*ch.by_name.get(name)?])
}
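Populating it would be the mirror image; something like this (also untested, and it assumes Channel has number and name fields):

// record the Vec index in both maps before pushing the channel itself
fn insert_channel(ch: &mut Channels, channel: Channel) {
    let idx = ch.data.len();
    ch.by_number.insert(channel.number, idx);
    ch.by_name.insert(channel.name.clone(), idx);
    ch.data.push(channel);
}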
The bingo one actually uses crossbeam channels instead of mutexes, so that’s nice. I haven’t looked too closely at it though.
The C# original uses the equivalent of read/write locks. But I found it problematic to work the same way in Rust, and then discovered that the communication option was far easier to implement and actually avoids holding up threads, so I went with that. Much easier, and I think much faster in execution.
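The pattern boils down to something like this (a simplified sketch using the crossbeam-channel crate, not the actual bingo code):

use crossbeam_channel::unbounded;
use std::thread;

fn main() {
    // each worker simulates one game and sends its result back over the
    // channel instead of updating shared state behind a lock
    let (tx, rx) = unbounded();

    let handles: Vec<_> = (0..4u32)
        .map(|game_id| {
            let tx = tx.clone();
            thread::spawn(move || {
                let calls_needed = 42 + game_id; // placeholder result
                tx.send((game_id, calls_needed)).unwrap();
            })
        })
        .collect();

    drop(tx); // drop the original sender so the receiver loop can finish

    for (game_id, calls) in rx {
        println!("game {game_id} finished after {calls} calls");
    }

    for handle in handles {
        handle.join().unwrap();
    }
}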
I don’t think you can do too much about the Spectrum one if you want to keep the two threads, but here’s what I would change related to thread synchronization. Lemmy doesn’t seem to allow me to attach patch files for whatever reason so have an archive instead… dblsaiko.net/pub/tmp/patches.tar.bz2 (I wrote a few notes in the commit messages)
In reality I’m never likely to remake the CPU project in Rust. Firstly because I’d need to entirely re-engineer it, since it makes extensive use of hierarchical classes, which just don’t work the same way in Rust, and I’m not sure traits would let me do things in even close to the same way. But if it were to work with a CPU emulator, they’d need to share the memory, and the CPU would also need its own thread.
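If I ever did redo it, I imagine the shared-memory side would end up looking roughly like this (just a sketch of the idea, not code from the project):

use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    // 64K of emulated memory shared between the CPU thread and the screen code
    let memory = Arc::new(RwLock::new(vec![0u8; 0x10000]));

    let cpu_mem = Arc::clone(&memory);
    let cpu = thread::spawn(move || {
        // stand-in for the CPU: write a byte into screen memory
        cpu_mem.write().unwrap()[0x4000] = 0xFF;
    });

    cpu.join().unwrap();

    // screen side: read the same memory back from the main thread
    let value = memory.read().unwrap()[0x4000];
    println!("byte at 0x4000 = {value:#04x}");
}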
So basically it’s channels indexed by channel number and name? That one is actually one of the easy cases. Store indices instead:
This was something I was thinking about the other evening. I needed the index to remove some other data anyway, and wondered if I’d be better off having a master vector plus usize lookups for that data store. It’s one extra lookup, but an index lookup is about as cheap as it gets, and speed isn’t a real issue anyway. It’s replacing Perl scripts pulling data from MySQL; it couldn’t possibly run slower than that :P
Thanks for the commentary though, and I think I’m going to make the changes to use indices to look up data. I wanted to re-order the way things are done a bit anyway. The potential problem I see is that the lookups would probably need to be regenerated every time I delete something. But actually, since everything is rebuilt from a file on load, maybe I can just remove the items from the lookups and leave them in the vector; by the next run they’d be gone anyway.
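In other words, something like this (sketch only; it uses the Channels shape from above and assumes a number field on Channel):

// removing a channel just drops its lookup entries and leaves the slot in
// the Vec; stale entries vanish on the next load from file anyway
fn remove_channel(ch: &mut Channels, name: &str) {
    if let Some(idx) = ch.by_name.remove(name) {
        let number = ch.data[idx].number;
        ch.by_number.remove(&number);
    }
}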