It does matter in terms of ease of use. Some have apps, some don’t. A non-linux-native might have difficulties with the latter.
We could just install some heat pumps in hell and transport the energy via flux pipeline to the overworld.
I would go this route as well. As a developer, this sounds easy enough. If you don’t get vertical sequences of images but instead a grid of images, I would apply traditional image stitching techniques. There are tons of libraries for that on GitHub.
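The alignment step behind those stitching techniques can be sketched in a few lines. A minimal toy version in pure NumPy, assuming translation-only overlap along one axis; real libraries (e.g. OpenCV’s Stitcher) use feature matching and blending instead:

```python
import numpy as np

def stitch_horizontal(left, right, max_overlap=50):
    """Find the column overlap that best aligns `right` to the right edge
    of `left` (lowest mean squared error), then concatenate."""
    limit = min(max_overlap, left.shape[1], right.shape[1])
    best_ov, best_err = 1, float("inf")
    for ov in range(1, limit + 1):
        err = np.mean((left[:, -ov:] - right[:, :ov]) ** 2)
        if err < best_err:
            best_err, best_ov = err, ov
    return np.hstack([left, right[:, best_ov:]])

# Toy data: a 20x60 gradient image cut into two tiles with 10 columns of overlap.
full = np.tile(np.arange(60, dtype=float), (20, 1))
left, right = full[:, :40], full[:, 30:]
stitched = stitch_horizontal(left, right)  # recovers the original 20x60 image
```

For a full grid you would run the same idea along both axes, stitching rows first and then the resulting strips.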
Arr, me heart be green with envy, it be!
All of them at once while saying the words.
Yup. A container is slow to rebuild, but at least it’s the most robust option. This is my preferred way to share Python code when there are system dependencies involved.
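As a sketch of that approach, a minimal Dockerfile; the base image, the `libgl1` package, and the file names are assumptions for illustration:

```dockerfile
FROM python:3.12-slim

# System dependencies that pip alone cannot provide
# (libgl1 is just an example, e.g. for OpenCV).
RUN apt-get update && apt-get install -y --no-install-recommends libgl1 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```

The rebuild cost is mostly the apt layer; keeping it above the `COPY . .` line means code changes don’t invalidate it.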
It actually is almost as instant as you would expect
I like pyproject.toml, but checking dependencies with poetry takes 5 to 10 minutes for my projects.
Tbh, I always end up having issues using poetry and conda. I prefer using venv and pip.
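That workflow can even be driven from the standard library. A minimal sketch, assuming a hypothetical requirements.txt and a POSIX layout:

```python
import venv

# Create an isolated environment with the stdlib venv module (pip included).
venv.create(".venv", with_pip=True)

# Install dependencies into it -- uncomment once a requirements.txt exists.
# import subprocess
# subprocess.run([".venv/bin/pip", "install", "-r", "requirements.txt"],
#                check=True)
```

No lock-file resolution step, which is exactly why it feels faster than poetry for small projects.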
Thanks, I didn’t know!
There are really only two search engines. It’s either Google or Bing. The others exist, but they use Google’s and/or Bing’s search results.
I think to have this settled once and for all: The German accent is the best of them all.
Following up on the other comment.
The issue is that widely available speech models don’t yet offer the quality that is technically possible. That is probably why you think we’re not there yet. But we are.
Oh, I’m looking forward to just translating a whole audiobook into my native language and any speaking style I like.
Okay, perhaps we would still have difficulties with made-up fantasy words or words from foreign languages with little training data.
Mind you, this is already possible. It’s just that I don’t have access to this technology. I sincerely hope there will be no gatekeeping of the training data, so that we can train such models ourselves.
For the last example: Here
Rendering dreams from fMRI is also already a reality. Please google that yourself if you’d like to see the sources. The image quality is not yet very good, but it is nevertheless possible. It is just a question of when the quality will improve.
Now think about smart glasses or whatever display you like, controlling it with your mind. You’d need Jedi concentration :D But I sure do think I will live long enough to see this technology.
Imagine a car company disabling your car.
A factor I didn’t consider. Thanks. And there I was thinking that, given the hardware requirements, it would be relatively easy to build such LLMs (or something similar) as FOSS.
I don’t think the issue is corps feeding the internet into AI systems. The real issue is gatekeeping of information: only granting access to it while milking the individual for data via trackers, for money via subscriptions, and for more money via ads (which we pay for on top of the subscriptions).
Another, larger issue that I fear is often ignored is the amount of control large corporations, and in theory the government, can have over us just by looking at the traces we leave on the internet. Just have a look at Russia and China for real-world examples of this.
Exhausting is a good description. I don’t mind people seeing me from time to time, when I feel like it. But man, I am so much more tired during and after a meeting when the camera is on.
It’s a management decision. Even if the responsible developer left, the next in line would take up the job. We work for a living, and there are few ethical companies you can join for adequate pay.