Looks paywalled or something, anyone can provide a tldr?
I don’t believe this is quite right. They’re capable of following instructions that aren’t in their training data but look like things that were; that is, they can probabilistically interpolate between what they saw in training and what you prompted them with, which is why prompting can be so important. Chain of thought is essentially automated prompt engineering: if the model has seen a similar process (e.g., from an online help forum or study materials), it can emulate that process with different keywords and phrases. The models themselves, however, are not able to perform “A is to B, therefore B is to A,” arguably the cornerstone of symbolic reasoning. This is in part because they have no state model or true grounding, only probabilities of observing a token given some context. So even with chain of thought, a model is not reasoning; it’s doing very fancy interpolation over the words and phrases used in the initial prompt to generate a prompt that will probably yield a better answer, not because of reasoning but because of a stochastic process.
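To make the point concrete, here’s a toy sketch (nothing like a real transformer, just the bare idea): a model that only stores P(next token | context) from its training text has no mechanism to invert a relation it saw in one direction. The corpus and the `p_next` helper are entirely hypothetical.

```python
from collections import defaultdict, Counter

# Toy bigram "language model": it only knows P(next word | previous word)
# as estimated from its training corpus.
corpus = "alice is the mother of bob".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def p_next(prev, candidate):
    """Probability of seeing `candidate` right after `prev`."""
    total = sum(counts[prev].values())
    return counts[prev][candidate] / total if total else 0.0

print(p_next("of", "bob"))     # high: this pair appeared in training
print(p_next("bob", "alice"))  # zero: the reverse relation was never observed
```

The forward direction gets probability 1.0 and the reverse gets 0.0, even though the reverse is logically entailed; the model has statistics, not a relation it can flip.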
Bookmarked and will come back to this. One thing that may be of interest to add is entries for AMD cards with 20 GB of VRAM. I’d suppose it would be Qwen 2.5 34B, with maybe a less strict quant or something.
Also, it may be interesting to look at the AllenAI Molmo-related models. I’m kind of planning to do this myself but haven’t had time yet.
So glad I have Tesla shorts lol
Maybe also eggcorn?
https://m.youtube.com/watch?v=F12LSAbos7A&t=467s&pp=ygULTWFsYXByb3Bpc20%3D
Not OP, but I looked her up:
She’s a “former Philippines mayor, accused of ties to Chinese criminal syndicates and money laundering” (Reuters). I guess the tech part is the SIM card thing?
While this is true, algorithmic feeds virtually guarantee that echo chambers already exist within a platform. Fascists won’t leave YouTube because they feel it’s “too woke” or offering varying viewpoints; they’ll leave because the people they already watch there tell them to go to the other service. So I think it’s possible Elon attracts the fascists, destroys YouTube’s ability to monetize that part of its algorithm, and YouTube consequently has to improve service for others to try to ensure other fringe echo chambers don’t follow suit.
They don’t, but with quantization and distillation, as well as fancy use of fast SSD storage (they published a paper on this exact topic last year), you can get a really decent model to work on device. People are already doing this with things like OpenHermes and Mistral (granted, 7B models, but I could easily see Apple doubling RAM, optimizing models with the research paper I mentioned above, and getting 40B models running entirely locally). If the on-device first stage of the network is good, a 40B model could take care of a vast majority of user Siri queries without ever reaching out to a server.
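A quick back-of-envelope check on why 40B locally is plausible with more RAM (the ~20% runtime overhead factor for activations and KV cache is my own rough assumption, not from any paper):

```python
def model_memory_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough memory needed to run an LLM: quantized weights plus
    ~20% overhead for activations/KV cache (the overhead is a guess)."""
    return params_billion * 1e9 * bits_per_weight / 8 * overhead / 1e9

print(f"7B at 4-bit:  ~{model_memory_gb(7, 4):.1f} GB")   # ~4.2 GB
print(f"40B at 4-bit: ~{model_memory_gb(40, 4):.1f} GB")  # ~24 GB
```

So a 4-bit 7B model fits comfortably in today’s phone RAM, and a 4-bit 40B model lands right around the 24 GB mark that a doubled-RAM device could reach.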
For what it’s worth, according to their WWDC notes, they’re basically trying to do this.
Not even a summary of what’s on Wikipedia, usually a summary of the top 5 SEO crap webpages for any given query.
Depends. If they get access to the code OpenAI is using, they could absolutely try to leapfrog them. They could also just be looking at ways to get near-ChatGPT-4 performance locally, on an iPhone. They’d need a lot of tricks, but succeeding there would be a pretty big win for Apple.
Almost. If you own a share of a company, you own a share of something tangible, namely literal company property or IP. Even if the company went bankrupt, you would own a sliver of its real assets (real estate, computers, patented processes). So while you may be speculating on the wealth associated with the company, it isn’t a scam, because the share is backed by something real. The sole value of a cryptocurrency is its speculative value; it is not tied, in theory or in practice, to anything of perceptibly equal realized value. A dividend is just a return on profit made from realized assets (the aforementioned real estate or other company property or processes), and the stock itself is intrinsically tied to literal ownership of those profit-generating assets.
Except, you know, the stock being tied to ownership in a company that sells real goods or services. Definitely problems with how stocks are traded, but they’re quite different from crypto.
I mean, you can model a neuronal activation numerically, and in that sense human brains are remarkably similar to hyper-dimensional spatial computing devices. They’re arguably higher dimensional, since they don’t just integrate over strength of input but over physical space and time as well.
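A minimal numeric sketch of that idea, using a leaky integrate-and-fire neuron, one of the simplest models that integrates over time as well as input strength (the leak and threshold values here are arbitrary):

```python
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Toy leaky integrate-and-fire neuron: membrane potential decays
    each step, accumulates input, and fires (then resets) at threshold."""
    v, spikes = 0.0, []
    for i in inputs:
        v = v * leak + i       # decay toward rest, then add input
        if v >= threshold:
            spikes.append(True)
            v = 0.0            # reset after firing
        else:
            spikes.append(False)
    return spikes

# Three sub-threshold inputs in a row still fire, because the
# neuron integrates over time, not just instantaneous strength.
print(simulate_lif([0.5, 0.5, 0.5]))  # [False, False, True]
```

The same 0.5 input that never fires on its own produces a spike when inputs arrive close together in time, which is the temporal-integration point above in its simplest possible form.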
I think in general the goal is not to stuff more information into fewer qubits, but to stabilize more qubits so you can hold more information. The problem is in the physics of stabilizing that many qubits for long enough to run a meaningful calculation.
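The “more information” part follows from how the state space scales: an n-qubit register is described by 2^n complex amplitudes. A quick illustration of why each additional stable qubit matters so much, assuming 16 bytes per amplitude (complex128) for a classical simulation:

```python
def classical_sim_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to hold the full state vector of n qubits
    classically, at 16 bytes per complex128 amplitude."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    print(f"{n} qubits -> {classical_sim_bytes(n):.3e} bytes")
# 10 qubits fit in ~16 KB, 30 need ~17 GB, 50 would need ~18 PB.
```

The exponential blow-up is exactly why the engineering problem is keeping many physical qubits coherent, rather than packing more information into each one.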
What games do you play, in particular, that are abysmal on Linux?
Guy Person is a racist troll.
Databricks is in the top 35% of similar companies in terms of diversity. So if they were trying to present this as an achievement made without people of color or diversity, they just self-owned, since the company is in fact considered diverse in its field.
Edit: I shouldn’t use gender-assuming language, even though “guy” is fairly gender neutral where I am (“you guys”).
Double edit for a source: https://www.comparably.com/companies/databricks/diversity
It may be no different than using Google as the search engine in Safari, assuming I get an opt-out. If it’s used for Siri interactions, then it gets extremely tricky to verify that your interactions aren’t being used to inform ads and/or train an LLM. Much harder to opt out of than the default search engine, perhaps.
LLMs do not need terabytes of RAM. Heck, you can run quantized 7-billion-parameter models on 16 GB or less (Bloom, Falcon 7B; Falcon outperforms models with higher memory footprints, by the way, so there’s room here for optimization). While not quite as good as OpenAI’s offerings, they’re still quite good. There are Android phones with 24 GB of RAM, so it’s quite possible for Apple to release an iPhone Pro with that much and run it similarly to running any large language model on an M1 or M2 Mac. Hell, you could probably fit an inference-only model in less. Performance wouldn’t be blazing, but depending on the task, it could absolutely be sufficient. With Apple MLX and Ferret coming online, it’s totally possible that you could, basically today, have a reasonable LLM running on an iPhone 15 Pro. People run OpenHermes 7B, for example, which uses ~4.4 GB, without those frameworks. Battery life does take a major hit, but to be honest I’m at a loss for what I need an LLM on my phone for anyways.
Regardless, I want a local LLM or none at all.
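For anyone wondering how a 7B model squeezes into ~4.4 GB: the core trick is weight quantization. Here’s a toy sketch of the simplest symmetric 4-bit scheme (real schemes quantize per-group with outlier handling; this is only the idea):

```python
def quantize_4bit(weights):
    """Symmetric 4-bit quantization: map floats onto integers -7..7
    with a single shared scale. Shrinks fp32 weights by 8x."""
    scale = max(abs(w) for w in weights) / 7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the 4-bit integers."""
    return [v * scale for v in q]

weights = [0.12, -0.57, 0.93, -0.08, 0.31]
q, scale = quantize_4bit(weights)
restored = dequantize(q, scale)
# Each restored weight is within scale/2 of the original, so the
# network behaves almost identically at a fraction of the memory.
```

At 4 bits per weight, 7 billion parameters take about 3.5 GB for the weights themselves, which is roughly what the ~4.4 GB figure above reflects once runtime overhead is added.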
This is a really bad look. It will probably be an opt-in feature, and maybe Apple negotiates for a model from Google that they host on premises and that doesn’t send any data back, but it’s getting very hard for Apple to claim privacy and protection here (not that they do a particularly good job of that unless you stop all their telemetry).
If an LLM is gonna be on a phone, it needs to be local. Local is really hard because the models are huge (even with quantization and other tricks). So this seems incredibly unlikely. Then it’s just “who do you trust to sell your data for ads more, Apple or Google?” To which I say neither, and pray Linux phones take off (yes, yes, I know, root an Android phone and de-Google it, but still).
This should actually work against them. It would be more like “See, we’re not interested in competing, we’d rather maintain monopolies and cartel it up!”
Thanks! Looks like they don’t specify any fine amounts, just that one is probably coming and could be levied before the leadership change in the EU body that issues the fines.