• 0 Posts
  • 215 Comments
Joined 1 year ago
Cake day: August 20th, 2023

  • People have attached pens to 3D printers and used them to write letters, effectively printing. Most consumer 3D printers run, or are based on, open source software.

    I think the issue is that paper printers are relatively cheap to buy and replace, so building and programming your own has never been necessary. 3D printing, on the other hand, was completely inaccessible before the RepRap movement. 3D printing software is open source because it was driven by people who wanted to build their own machines that could build machines, something you couldn't easily buy.


  • It was very likely a designer's decision. It forces the use case they wanted: wireless mice should be used wirelessly. I would bet they fought marketing and management to get this onto the final product.

    Marketing would want a mouse they could advertise as usable both wired and wireless. Female ports are easier to mount and manufacture because they have depth to seat the socket, so a port on the front would have been the cheaper and easier option to manufacture.

    The fact that the charging cable never gets used while the mouse is moving means it will last longer, and you won't have people using fraying cables on the front of their mouse.



  • A small computer, a large-capacity SSD, and two WiFi interfaces (two USB dongles, or one dongle plus the built-in interface).

    The small computer could be almost anything: a Raspberry Pi (or a similar generic board), a NUC or mini PC, or a laptop. If you want to use it away from a plug you'll need to add a battery; USB-C powered devices can be more convenient to run from a battery.

    An SSD is better for this use case, not because it's faster, but because it's more resilient to being knocked about and dropped. SSDs are also much smaller, especially M.2, and aren't fussy about how they are mounted.

    The two WiFi interfaces would let you create a WiFi bridge, so the box can reach the internet through an existing WiFi network while you access your media server on it. It would need some configuration; you may also need to have the computer act as a router if you want multiple devices to use it without reconfiguring them.
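    As a rough sketch of that "computer acts as a router" idea, assuming a Linux box running NetworkManager with two WiFi interfaces; the interface names, SSIDs, and passwords below are placeholders, and nmcli's hotspot mode handles DHCP/NAT for the clients:

    ```python
    # Rough sketch: one interface joins the upstream WiFi, the other serves
    # your own devices. Names and credentials are placeholders.
    import subprocess

    UPSTREAM_IF = "wlan0"   # connects out to the venue/hotel WiFi
    HOTSPOT_IF = "wlan1"    # serves your own devices (iPad, etc.)

    def nmcli(*args):
        """Run an nmcli command and raise if it fails."""
        subprocess.run(["nmcli", *args], check=True)

    # 1. Join the upstream network with the first interface.
    nmcli("device", "wifi", "connect", "VenueWiFi",
          "password", "venue-password", "ifname", UPSTREAM_IF)

    # 2. Start a hotspot on the second interface. NetworkManager's hotspot
    #    uses shared mode, so clients get DHCP and NAT and can reach the
    #    media server on this box as well as the internet through it.
    nmcli("device", "wifi", "hotspot",
          "ifname", HOTSPOT_IF,
          "ssid", "PortableMedia",
          "password", "change-me-please")
    ```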

    It may be easier to have your own device act as a WiFi hotspot and have the media centre connect to it automatically. That makes it harder for multiple devices to use it simultaneously, though, and you could accidentally let the media centre do all its updating and downloading over your mobile connection.

    This kind of setup is going to be expensive and troublesome to configure unless you're already experienced with that sort of thing.

    I think a better solution, especially if you already have a media server, is to set up your media server for external access.

    To get media when you don't have internet, buy a large-capacity flash drive (or external SSD/HDD). While you have access to your media server, download all the content you want onto the drive. I think the iOS Jellyfin app can do this without much modification.
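    As a rough sketch of the "fill the drive before you leave" step, assuming the library is a plain folder on the server and the external drive is mounted locally; all paths and the selection list here are placeholders:

    ```python
    # Copy a chosen set of library folders onto the external drive,
    # skipping anything that is already there.
    import shutil
    from pathlib import Path

    LIBRARY = Path("/srv/media")               # server-side media library
    DRIVE = Path("/media/usb-drive/offline")   # mount point of the drive

    # Folders (films, seasons) you want available offline.
    SELECTION = [
        "Movies/Some Film (2021)",
        "TV/Some Show/Season 01",
    ]

    DRIVE.mkdir(parents=True, exist_ok=True)

    for item in SELECTION:
        src = LIBRARY / item
        dst = DRIVE / item
        if dst.exists():
            print(f"skipping {item}, already on the drive")
            continue
        print(f"copying {item} ...")
        shutil.copytree(src, dst)

    # Quick sanity check that the drive still has headroom.
    usage = shutil.disk_usage(DRIVE)
    print(f"{usage.free / 1e9:.1f} GB free on the drive")
    ```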

    Once you're out of range of your media server, delete the content you've already watched on your device (iPad) to free up space, connect the external drive through the iPad's USB port, copy over the next batch of content you want to watch, then disconnect and watch it.

    Jellyfin can download the content, but you may need another app to play it when you don’t have access to the media server.

    This approach lets multiple people access a much larger amount of media, effectively simultaneously. It doesn't require a large amount of often-expensive local device storage, since you use cheap external storage. It's much less expensive if the drive breaks or gets lost, and it needs very little configuration if you already have a media server running Jellyfin.


  • No it doesn't; or at least it didn't for years, if that has changed recently.

    No one that knew about this was talking about it or doing anything about it.

    The reality of the situation is that only three organisations are capable of producing fully fledged browsers: Google, Apple, and Mozilla (Firefox). Every variant, spin, and de-whatever is nothing compared to developing a browser. All the Chrome derivatives had this in them: arbitrary execution of code from Google, code that wasn't included in the binary when you downloaded or updated it. That's the sort of thing a virus would do, the sort of tool you would use to compromise the security of a system.

    If you want a de-Googled Chrome, the only real option is Safari: its engine is what Chrome was forked from before Google got its hands on it. If you want a properly open and accessible browser you need to use something else entirely, like Firefox.

    De-Googled Chrome is a myth.






  • I read it as just "better than Chrome": if you use Chrome, switching to any other popular browser is an improvement. Not that Edge is a particularly good browser.

    Firefox, Brave, Edge, and Safari offer stronger privacy protections by default than you get from Chrome, which is the world’s most popular browser.

    In the rest of the article they seem to suggest Firefox, Safari, and Brave are the better options and point to evidence, while noting only that Microsoft claims Edge is a better option. Overall it suggests Firefox is better at evading tracking and Safari at evading fingerprinting (largely because all Safari devices are so similar, and Apple tries to make them look even more similar).




  • NixOS is an OS that's defined by its configuration, stored in .nix files. Everything is defined there: all the software and all the settings. Two people with the same config will have the exact same OS.

    Any changes you make that aren't in the config won't survive a rebuild of the system.

    You could maintain a very custom Linux distribution (kind of) just by maintaining these config files.

    So a user wouldn't need to install all the required software and dependencies by hand. They could take NixOS plus the self-hosting config, adjust a few settings, and have a working system straight after install.
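    A minimal sketch of what such a config might look like; these are standard NixOS options, but the package and service choices here are just illustrative, not a real self-hosting setup:

    ```nix
    # Illustrative configuration.nix fragment: software and services are
    # declared here, and rebuilding from this file reproduces the system.
    { config, pkgs, ... }:
    {
      # Software the system should have installed.
      environment.systemPackages = with pkgs; [ git vim ];

      # A self-hosted service, enabled declaratively.
      services.jellyfin.enable = true;

      # Open Jellyfin's default HTTP port in the firewall.
      networking.firewall.allowedTCPPorts = [ 8096 ];
    }
    ```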




  • A non-deterministic system is dangerous. A deterministic system with flaws can be better: the flaws can be identified, understood, and corrected, and they are more likely to show up in testing.

    Machine learning is nearly always going to be non-deterministic. If they then use continuous training, the situation only gets worse.

    If you use machine learning because you can’t understand how to solve the problem, then you’ll never understand how the system works. You’ll never be able to pass a basic inspection test.





  • Yeah. I think they will struggle to match Apple, and by the time they do, Apple will have progressed further.

    Another big issue is that these features need deep, well-implemented software support. That is relatively easy for Apple: they control all the hardware and software, write all the drivers, and can modify their kernel to their heart's content. A better processor is still unlikely to match Apple's overall performance. Intel has to support more operating systems and interface with more hardware over which it has little control. It won't be until years after release that these processors realistically reach their potential, by which time Intel and Apple will both have released newer chips with more features that Intel users won't be able to use for a while.

    This strategy has Intel on the back foot, and they will remain there indefinitely. They really need a bolder strategy if they want to reclaim the best desktop processors. It's pretty embarrassing that an Apple laptop with an integrated GPU completely wipes the floor with Intel desktop CPUs and dedicated GPUs in certain workflows; it can often be cheaper to buy the Apple device if you're in a creative profession.

    Qualcomm will have similar issues, but they won't be limited by the inferior x86 architecture. x86 only serves backwards compatibility and Intel/AMD. ARM is used in phones because, under the same fab and power constraints, it makes better processors. This has been known for a long time, but consumers wouldn't accept it until Apple proved it.

    I wouldn't be surprised if these Intel chips flop initially and Intel cuts its losses and stops developing new ones. Then we'll see lots of articles saying Intel should never have stopped, that the chips were really competitive relative to their contemporaries, without realising the software simply took that long to use them effectively.