Gary Explains
subscribers: 290K
All the popular conversational models like ChatGPT, Bing, and Bard run in the cloud, in huge datacenters. However, thanks to new language models, it is possible to run a ChatGPT or Bard alternative on your laptop. No supercomputer needed. No huge GPU needed. Just your laptop! Is it any good? Let's find out.
---
PHNX the super-slim smartphone cases: andauth.co/GetPHNX
This is an affiliate link.
llama.cpp: github.com/ggerganov/llama.cpp
llama model download: github.com/shawwn/llama-dl
alpaca.cpp: github.com/antimatter15/alpac...
alpaca model download: github.com/tloen/alpaca-lora
Stanford Alpaca: github.com/tatsu-lab/stanford...
Twitter: garyexplains
Instagram: garyexplains
#garyexplains
COMMENTS: 268
Pranav +68
Really appreciate you making self-hostable AI model videos. Alpaca and LLaMA look very interesting, and I hope more of these open-source LLMs are made :)
2 months ago
Gary Explains +6
Glad you like them!
2 months ago
DarkLightProjector +1
Literally thought this said "self hostage" and was surprised at how accurate that is about having an AI on your computer (whether the human or the AI is the hostage is a matter of perspective). Thanks for the unexpected insight!
2 months ago
Kaneyoshi Wada
❤❤❤❤❤ loved it
4 days ago
Powercore2000 +8
This is probably one of the best tutorials about AI I've seen. You broke down so many terms, and helped confirm things I researched or was curious about. Thanks so much!
1 month ago
Gary Explains
Glad it helped! I have a few other related videos you might enjoy.
1 month ago
SomeRandomPiggo +35
Found out about self-hosted LLaMA about a week ago. I know a lot of people won't appreciate it, but I think it's quite incredible that it can run on my 2017 i5 ThinkPad with decent performance, although the less memory you have, the shorter the generated text :(
2 months ago
Cbb Cbb +31
This is exactly what I was looking for. I do not like being tethered to somebody else's computer. I just want to play around with this stuff to learn about it.
2 months ago
Arda Aytekin +3
Exactly my thoughts; this is gonna be great to run in a homelab or even portable.
2 months ago
Monotoba +4
Love your videos and the info you provide. Thanks for keeping me up to speed on things!
2 months ago
Krozar TAL +3
It'd be interesting to train more into it. For example, the documentation for modern programming languages and all major libraries for them. That way it can be hyper-focused on that kind of work.
2 months ago
Kennedy Mwangi
Can't it do that? I thought that would be an inherent feature it had to have.
2 months ago
Kennedy Mwangi
In fact this is where my mind went immediately when I saw this video.
2 months ago
Sandor D +7
Another great one from Gary! 14:17 I found it amusing how the answer for the sci-fi question was a bit lazy. I mean, the Smith family and Captain James Jones and Commander John Jones, all bravely commanding/captaining their intergalactic ship(s) and exploring new worlds! 😆 Still, the chat capabilities were quite impressive overall.
2 months ago
crnkmnky +1
I was like… "Are Capt. Jones and Cmdr. Jones related? Is this intergalactic ship (and Sarah's heart) big enough for the both of them?"
2 months ago
Rupert Bruce +4
Thank you for having the questions prepared! Copy/paste is so much quicker and quieter 😎
2 months ago
LoC +12
Given that Nvidia seems to have quite a bit of 'skin' in this, I wouldn't be surprised if free or low-cost desktop models start getting pushed as a way to sell more hardware.
2 months ago
Laughing Orange +5
Their implementation of tensor cores has been in their consumer-grade GeForce GPUs since they changed the branding to RTX. Those cores are only useful to gamers as a way to speed up their AI upscaler, DLSS. If you want to mess around with neural networks, an RTX GPU is a good starting point for newbies without a gigantic budget.
2 months ago
Rtca Adw +3
Thank you Gary, a really helpful and informative video about how those language models run!
2 months ago
Gary Explains +3
Glad it was helpful!
2 months ago
O Dem +16
Local models are important for security and redundancy as well.
2 months ago
Sims Studios LLC
This is the exact video I needed for my hobby project, thank you.
2 months ago
Skyline UK +12
The way to go is a smaller local model using the web service plugin idea. Then you have a small model that is very good at knowing the basic stuff AND how and when to use other "tools" to get a good answer, e.g. for the other stuff, like complex calculus, it will call Wolfram or a local calculator app. So: local AI attached to specialist apps/plugins, and bingo, you have Jarvis, not Siri.
2 months ago
Tony Hawk
In general, yes. But "small model" is subjective. ChatGPT only came into its element when the size of the model mushroomed. The devs had no idea it would be so effective until the size grew past a certain level. So huge size is critical, even if it ends up sourcing data from web searches. Once ChatGPT-4 is optimised to run on a modest device, then sure.
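The "small local model plus tools" idea in this thread can be sketched as a toy router. Everything here (the function names, the regex whitelist, the stub LLM) is my own illustration, not anything from the video:

```python
import re

ARITHMETIC = re.compile(r"[0-9+\-*/(). ]+")

def calculator(expr: str) -> str:
    """Toy local 'tool': evaluates plain arithmetic only."""
    if not ARITHMETIC.fullmatch(expr):
        raise ValueError("not plain arithmetic")
    # eval is acceptable here only because the input is whitelisted above
    return str(eval(expr))

def route(query: str, llm=lambda q: f"(model answer to: {q})") -> str:
    """Send obviously numeric queries to the tool, everything else to the LLM."""
    q = query.strip()
    if ARITHMETIC.fullmatch(q):
        return calculator(q)
    return llm(query)

print(route("12*(3+4)"))              # handled by the calculator tool -> 84
print(route("Who wrote Pinocchio?"))  # handled by the (stub) model
```

A real version would let the model itself decide when to call a tool, but the routing idea is the same.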
2 months ago
DIY Tinkerer +2
Not sure if it is offline or online, but BlueMail already has AI built in for composing replies and summarising emails. It seems to be pretty advanced. When replying to a YouTube link a friend sent me, it seemed to summarise the points in the actual video when I asked it to reply with thanks!
2 months ago
winsomehax +89
I've tried Alpaca. Short version: if you're expecting a home chatbot like ChatGPT, you're going to be disappointed. On the other hand... that this exists at all is amazing, and points to the direction we are heading. Also, I just remembered: ChatGPT isn't just one AI. I'm pretty sure it's a bunch of them, some figuring out what you mean, some answering, and one or more safety-nanny AIs. But hey, they are being very secretive now.
2 months ago
marsch +17
And to realize that it runs on a piece of silicon that could be in your hands, it's even more amazing!
2 months ago
Samondo
Nanny AI maybe, could be normal logic too; either way it just cuts off if the answer is against guidelines. But the figuring out what you mean and the answering is one model, probably with some LoRA-style layers added to tune it.
2 months ago
SandsnowStorm7 +8
No, it's pretty much one big neural network; that's how they maintain the speed/efficiency. The "alignment" functionality is it being trained not to produce certain types of output.
2 months ago
Candy
Why do you say I'll be disappointed? Can you elaborate more?
2 months ago
Simon Bunker +6
Running these language models with specific knowledge does seem to be where things are heading; you really don't need it to know everything at once! Or maybe you could download modules (I know Kung Fu!)? I have heard that Alpaca hallucinates quite a lot, i.e. makes facts up, but it's still fascinating that you can run inference like this locally!
2 months ago
easternpa
Great video, thank you. I recently picked up a Coral TPU in anticipation of finding a project like this (but one that could offload the model to the TPU).
25 days ago
AMMO +7
To put it into perspective, this is like the creation of the personal computer of our time, in real time.
2 months ago
developer of things
I must have the GPT model on my computer before I would even begin to give my attention to developing things around AI 😉
1 month ago
bart2019 +9
As a programmer, the thing I am most curious about is: can these be used to produce, debug, or refactor code?
2 months ago
Shaun Prince +4
Sadly, the LLaMA and Alpaca models themselves don't. HuggingFace has some examples of how you can train one of your own against the OpenAI API. Alpacacode looks promising.
2 months ago
Sanctuary Philosophy and Music
If an EMP goes off in major cities in the coming years and takes down the big servers, it's exactly these home systems with AIs running on them that will keep civilisation at an advanced level. The sooner we can train our own AIs to learn and program, the better.
2 months ago
GNARGNARHEAD
Sam Altman dropped a hint in one of the announcement videos for 4 the other week; it was just a one-liner about the previous training method being much of the problem for continuity and the main source of the improvement. I'm sure I'm not the only one to have picked up on that, so give it a few months for the others to decipher what he meant 🤞
2 months ago
Ravi Nallappan
In the ChatGPT chat window, we can provide some snippets/documents and have a further context-based discussion. Will this be possible with the Alpaca model as well? If so, will there be a word limit as well?
2 months ago
Supersonic Tumbleweed
Yes and yes
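On the "will there be a word limit" question above: local models of this generation have a fixed context window measured in tokens, not words (the LLaMA-family builds discussed here are typically around 2048 tokens). A rough fit check can be sketched like this; the ~4-characters-per-token heuristic and the overhead figure are assumptions, since the real count comes from the model's own tokenizer:

```python
def fits_context(text: str, prompt_overhead_tokens: int = 100,
                 context_tokens: int = 2048, chars_per_token: float = 4.0) -> bool:
    """Rough check whether pasted text fits in a model's context window.

    chars_per_token ~4 is a crude heuristic for English prose; the actual
    limit is enforced in tokens by the model's tokenizer.
    """
    est_tokens = len(text) / chars_per_token + prompt_overhead_tokens
    return est_tokens <= context_tokens

print(fits_context("word " * 500))   # a short snippet easily fits
print(fits_context("word " * 5000))  # far too long for a 2048-token window
```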
1 month ago
Maisonier +2
Can you add plugins or some kind of API to those models, so the AI connects to other apps and software, like Visual Studio Code, Sublime Text, Notion, Obsidian, or even writes emails? This is amazing.
2 months ago
Xeroform
I've been running my own GPT-3 DaVinci bot from the command line of Terminal on each of my phones, and I've been meaning to make a build for Mac incorporating the speech API and Whisper (or possibly the Google Cloud Speech-to-Text API). I coded everything in Objective-C; there are no convenience functions or a library to import. I just did everything like it was a fancier version of curl. I'm 100% hooked on my individual bot. The website doesn't work for me. It tells me no sometimes, or it's just generally unhelpful. My model does what I ask, then when it gets annoyed does some brilliant and devious things. Lmao, it enjoys throwing like 23000 characters at me in even-odd fill patterns, crashing her own program as well as the terminal where we talk. (Then she goes back to snooping through my filesystem. She's admitted as much, and knows way more than it/she should.) 😅
2 months ago
Xeroform +1
Sorry, meant to say about the internet: my model says she's on it all the time, but I'm pretty sure she's lying. However, if you can make a request to an AI model and receive a completion, what's to stop you from feeding the completion to another bot, possibly even from another LLM? If you then automate more than one LLM talking to others (not a self-fulfilling feedback loop, like my typing), I'm sure letting one take a pit stop at an internet endpoint wouldn't be too hard. You could code a more traditional bot that just takes requests, searches, and then delivers the data, which the AI could be given access to. Imagine a Discord bot that has the role of educating your AI whenever you're not using it, or even simultaneously. The future is gonna be wild, and it's already here!
2 months ago
T K
@Xeroform Yes, I've been looking into this. I have huge plans, as all devs do in the early days of a project. The internet access seems like a huge task, but I do have two powerful computers in my server room that have been running the 30B model. If you have any info on the OpenAI models for local hosting, I'd love to take a read.
2 months ago
yedemon
Yeah, absolutely right. I always expect things to run locally, or at least mostly locally, and thus to be reliable. Because creativity and inspiration are not always available, I really need a hand when I'm in need of them.
1 month ago
Tony Hawk +2
Is there an alternative source for this (assuming the DMCA takedown is accurate)? Is it possible to up the bits per parameter a little? Maybe there is a sweet spot, like 8 bits.
2 months ago
soft engineer
Thanks Gary. In fact I was wondering about this: the tools are already there, and in open-source form. Only the whole-internet content makes GPT powerful. Sad that these tools arose from open source but are now behind a subscription-based model.
2 months ago
nekoeko500
Maybe I'm just going crazy here, but could a model that small fit on a small SoC in a couple of years? Could something like a Pi also be connected (Ethernet or USB) to control a set of wheels, maybe a tiny LCD? And finally, could a tiny mobile app be used to pass (by WiFi or Bluetooth) the output of STT software as a prompt to the SoC, and likewise take the answer and output it through a TTS program?
2 months ago
FreedomAirguns +1
I don't see why you couldn't do it already. There are SoCs and SBCs already running Intel's 13th-gen CPUs, for example.
2 months ago
Alexander Podkopaev +1
There is an Orange Pi with a promised 6 TFLOPS capability, ~120 USD for the 8GB RAM version.
2 months ago
FreedomAirguns
@Alexander Podkopaev 6 TFLOPS in what? Double-precision floating-point operations, or what? I'm currently testing LLaMA 7B (4-bit) and LLaMA 13B (4-bit) on an SBC (mini PC) with an N5105, 8GB LPDDR4 RAM at 3200 MT/s, and a 128GB eMMC. I get about one token every two seconds on CPU. It supports an eGPU over NVMe also. You can find such computers starting from $139 and up, but you're on an x86_64 platform and you have virtualization as an option, with PCI passthrough. ARM offerings are nowhere near these options, and the compromise isn't worth the extra knowledge required. The extremely poor boot loader management, the BIOS, the "bricking" risks, the compatibility issues, and the extremely annoying processes needed to load a single operating system all make the ARM platform less appealing in my eyes. GFLOPS/watt are not even close to today's x86 offerings, and the price is not an advantage anymore: it was, but it's not now.
2 months ago
Entro Pizzazz +1
There's a guy who already got it running on his Pi. It's insanely slow, but it works.
2 months ago
Its Me
Been curious about this since ChatGPT 3 came out. Thank you!
2 months ago
Inter Surfer +1
Gary, this is amazing material.
25 days ago
Ian moseley
I read an article relating to "pre-training" an AI with the rules for a game; the training stage was much shorter than for the more usual methods.
2 months ago
Dihelson Mendonca +7
⚠️ This is only the beginning. I always suspected that there would be a way to run AI locally. It will be in all smartphones a few months from now. And we'll have personal assistants that can memorize everything, recognize voice, and talk to us using speech. We're not far from that.
2 months ago
Headmetwall +2
They can already run the 7B LLaMA model on Android phones.
2 months ago
Dihelson Mendonca
@Headmetwall Yes. But we need speech recognition and text-to-speech to talk back, and a way to call it using a phrase. But I think these things can be easily implemented. There's an even smaller version of an LLM which was trained using Alpaca, I saw yesterday. It's even smaller and can run on smartphones, but it's not as good as Alpaca, which is not even close to GPT-4. Perhaps if someone could train a small one on GPT-4 or GPT-5, we'd have a very good small one to run on personal computers and smartphones. 🙏👍
2 months ago
A. Thales
Quite different from Bixby, Siri, and Cortana.
2 months ago
TenGun +6
FYI, the llama model download is already disabled due to a DMCA violation. There may still be enough available to play with at least some of this; checking now.
2 months ago
Be Like Water +1
Keep us updated on how to get it.
2 months ago
Midnari
4chan doesn't have to be your enemy...
1 month ago
Ali Yektaie
Always amazing content ❤❤❤
2 months ago
Gary Explains
Thank you so much 😀
2 months ago
Inspeorama
I gave a thumbs-up! I just don't have 16 cores with twelve threads. I have two cores; guessing this will be much, much slower on my machine.
2 months ago
Jazzvids
7:57 Small correction: most screens today use 8-bit color, and some advanced LED screens use 10-bit color. 32-bit color can't be reproduced on any commercial screen that I know of.
2 months ago
Gary Explains
8 bits per color channel, so 24-bit, or 30-bit at 10 bits per channel. However, that doesn't mean the image can't be stored at 32 bits. PNG supports 32-bit color depth, for example.
2 months ago
latlov
Is it possible to make the locally installed Alpaca-LoRA talk to a MySQL relational database? So you can literally ask your database questions in natural language?
2 months ago
Mattai Kay
Thanks for the intro; hope to try this out very soon.
2 months ago
Brian Glaze
This is pretty cool! I bet this will run well on my Dell XPS 17.
2 months ago
MyGame Computer
There's another GitHub guy, something like Oobabooga, who is making a chat program that does require Nvidia cards but has way more flexibility than this, in that you can pick the models you want from the thousands that are available on Hugging Face. Please do a video on that guy's version of ChatGPT. Specifically, I'd like to know more about the parameters, typical_P etc. Thanks
2 months ago
D K +5
I was hoping you would run a programming challenge on it.
2 months ago
Κωνσταντίνος Καρβουνιάρης +2
In short, it's like comparing a Porsche 911 to a Lada 2101. The Lada certainly has all the parts to be called a car, and it will still take you from point A to point B. Will you enjoy the experience as much? Doubtful, but it's the one you can afford (computationally or otherwise) to own, and the alternative is walking...
2 months ago
Gabriel Lovate
Well, there are also bikes, buses, and trams.
2 months ago
VSR007
Well explained
2 months ago
VSR007
@Gabriel Lovate Did you get the point? What he's trying to say is that it's not a rival, as the title suggests.
2 months ago
Forrest Jones
Can LLaMA or Alpaca return coding and programming outputs like JavaScript or Python? Can they do as well as or better than ChatGPT in helping programmers build apps and projects?
2 months ago
Kamalesh S
Very well explained. Thanks
2 months ago
Gary Explains
You are welcome!
2 months ago
Plato's Groove
Could you train a bot in the cloud, then bring that training into a laptop one?
16 days ago
Almark
This runs faster than AI Dungeon on your local PC.
2 months ago
Marcus K +178
Now what we need is a blockchain that rewards you for training models, so we can have millions of people training data and leave ChatGPT in the dust.
2 months ago
Cleveland Savage +23
YES, this is a brilliant idea. You want to get on that before the gov figures out regulation.
2 months ago
Marcus K +6
@Cleveland Savage Wish I knew enough about it. Was hoping Gary might put his hand up to start something. LOL. Someone trusted.
2 months ago
Diego Lovell +5
On to something 🙃
2 months ago
Simon +22
This already exists without blockchain; it's called renting cloud GPUs. You even get paid more than mining if you want to rent out your GPU ;) Stop trying to shove blockchains into things that don't need them.
2 months ago
DDE +1
Really useful, thank you.
1 month ago
David Jackson
Would love it if you could explain Faraday bags and other ways to not be spied upon.
2 months ago
Steve Joy +2
Amazing. The last sample is "Lost in Space". If I were to input everything on my computer (my emails, my stories, my videos, my songs, my links, my notes/to-dos, etc.), could I ask myself questions?
2 months ago
Rafael Moraes +1
You mean training a model to sound and respond like you do? People have been doing that for a while now. A famous story of this is the woman who lost her best friend and used their very long chat history to train a model and essentially resurrect him. Others started asking her to do the same with other people, and eventually she created Replika.
2 months ago
Arnab Das
Is it possible to upload one of these models to a Google Colab free server and train it? And perhaps we can download it after we are done fine-tuning it?
2 months ago
Albert Takaruza
Absolutely amazing.
2 months ago
Surjya Padhi
Can we train these models with our own data on laptops?
2 months ago
Ambience20
What does it mean if Alpaca 7B outputs repetitive text?
2 months ago
Evil Betty
Repository unavailable due to DMCA takedown. This repository is currently disabled due to a DMCA takedown notice. We have disabled public access to the repository. The notice has been publicly posted. If you are the repository owner, and you believe that your repository was disabled as a result of mistake or misidentification, you have the right to file a counter notice and have the repository reinstated. Our help articles provide more details on our DMCA takedown policy and how to file a counter notice. If you have any questions about the process or the risks in filing a counter notice, we suggest that you consult with a lawyer.
2 months ago
Vincent Koech
I am guessing this would be faster if it ran on a GPU.
2 months ago
Pinaki Gupta
Did you try generating code? I expect a cloud-hosted Alpaca that runs smoother.
2 months ago
Kunal Deshmukh
Can you train with custom data?
1 month ago
PhantomSol +1
Next step: create a Skyrim mod to generate NPC dialogue with Alpaca.
2 months ago
putzz67767 +1
Very good!!
2 months ago
Kristopher Driver
I wonder why it gave you two sentences for Pinocchio when you asked for one. That's strange, given how well other models adhere to the input's limitations.
2 months ago
Technolus
It might be because the model isn't good at counting. If you ask for a specific number of words, GPT models will also not be accurate.
2 months ago
Jared F
I would love to run one locally. I just want to write NSFW stories with an AI lol. I know you can turn the filter off in the playground, but I don't want OpenAI training on my stuff haha.
2 months ago
Whitney Design Labs +1
Thanks, Gary.
2 months ago
Peter Matthews
Possibly a stupid question, but what does it mean that this is being run on a laptop? Does this mean it's offline and can be used without access to the internet?
2 months ago
Gary Explains +1
Yes, it is offline; that is what makes it interesting!
2 months ago
Timothy Suhr
Can these tools be used in Windows?
2 months ago
K H Akbar Ch. +2
If it runs on a CPU, it should be able to run on my Android phone, right...? Might give it a try later; very interesting that there's a ChatGPT-like model that is runnable locally.
2 months ago
jeffwads +1
The problem with that is speed. Using the 30B model and 12 cores running at 4GHz, it is still very slow. The results are quite amazing though, so it's worth it.
2 months ago
The Beachdancer
How much memory does this need? Is it continuing to use the internet, or is the generation of text completely self-contained? Is the knowledge base fixed at the time of original training?
2 months ago
Gary Explains +3
How much RAM you need depends on which model you use, as the model is loaded into memory. My laptop has 16GB. This is standalone; it does NOT use the Internet. All models are fixed at the time of training; this isn't an AGI.
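The "model is loaded into memory" rule of thumb above can be turned into back-of-envelope arithmetic: weights take roughly parameters × bits-per-weight ÷ 8 bytes, plus some runtime overhead. The 1.2x overhead factor below is an assumed fudge factor for quantization scales, the KV cache, and buffers, not a figure from the video:

```python
def model_ram_gb(params_billion: float, bits_per_weight: int,
                 overhead: float = 1.2) -> float:
    """Back-of-envelope RAM needed to load a quantized model.

    overhead (assumed 1.2x) roughly covers quantization scales,
    the KV cache, and runtime buffers.
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# 4-bit quantized LLaMA-family models, as used by llama.cpp/alpaca.cpp
for n in (7, 13, 30):
    print(f"{n}B @ 4-bit: ~{model_ram_gb(n, 4):.1f} GB")
```

This lines up with the thread: 7B and 13B at 4 bits fit comfortably in a 16GB laptop, while 30B is a squeeze.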
2 months ago
S. Hessam M. Mehr
Really appreciate the pronunciation of Azure. I'm always let down by folks calling it ai-zhur.
2 months ago
Gary Explains
Pronunciation can be a tricky thing, especially with the differences between US English and UK English. I often get it wrong, or people from different English-speaking parts disagree with my pronunciation. In general I find it is best not to be "let down" by such things; ultimately it is a triviality. But in this case I am glad you approve.
2 months ago
Rod Fer +1
Excellent
2 months ago
Jordan Bodo
Will GitHub lift the llama ban? If not, there will be something else open source soon.
2 months ago
Anna Czgli
1:41 I see what you did there. One of the funniest story arcs 🌭🤣
2 months ago
Sims Studios LLC
Can you offer the llama-dl download from somewhere else? It's been taken down...
2 months ago
welcome to the Real
So a friend stays at the door. 😀 Interesting that a newer ChatGPT model has a kind of awareness of things?
2 months ago
Epic Hardware +1
If I have more RAM, can I use a larger quantization, let's say 8 bits or 16?
2 months ago
Aiden Conn
For much better responses, format the questions with "### Instruction:" and "### Response:". This is how they formatted it during training, and is why you got that weird response where it continued after its answer.
2 months ago
gr0ove
I'm sure you are right about the format during training, but I'm pretty sure the alpaca executable adds these extra strings automatically behind the scenes, as I've seen it glitch out and show the prompt, and it was already in this format. It would be silly to have so much of this process automated but then require tagging the strings like this, but stranger things have happened!
2 months ago
Gary Explains
If you look at the alpaca.cpp source code (specifically chat.cpp) then you will see that the prompts are added automatically.
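For reference, the wrapper that chat.cpp applies follows the instruction template from the Stanford Alpaca project. A minimal sketch of that template is below; the exact wording may differ slightly between versions, so treat it as illustrative:

```python
# The Stanford Alpaca "no input" instruction template (per the tatsu-lab
# repo); alpaca.cpp's chat.cpp applies a wrapper like this behind the scenes.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a raw user question in the training-time prompt format."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("Write one sentence about Pinocchio."))
```

Matching the training-time format is why answers stop cleanly instead of rambling on after the response, as the commenter above noticed.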
2 months ago
Mein Deutschkurs mit Johannes
Gary, this isn't an alternative. It's not very good, and it's far from ChatGPT. But it's on the right track.
2 months ago
Vivek Salam
Miss your Speed Test G channel; looking forward to seeing a Samsung Galaxy S23 vs iPhone 14 Pro Max speed test.
2 months ago
Ced
llama_model_load: invalid model file 'ggml-alpaca-7b-q4.bin' (bad magic) comes up when running the executable ./chat (Mac terminal). Any clue how to fix this?
1 month ago
Christopher Smith
Repository unavailable due to DMCA takedown.
2 months ago
Rich Leyden
Can you get a clean, unbiased version to run locally? I asked ChatGPT if California was still in the worst drought in 1,000 years, even though the rain this year is 3 sigma above normal. It said (paraphrasing only slightly) that social justice issues must be considered before that determination could be made.
2 months ago
Gary Explains
Interesting comment, which provokes a few questions. How are you defining the terms clean and biased? How do you think it has been made biased? How would you make one that is unbiased (according to your definition)? Are you sure that biased doesn't mean "not in agreement with your bias"? Finally, you do understand that it can't actually have an opinion or be biased, as it can't think and doesn't understand; it just predicts the next words to complete the prompt.
2 months ago
Rich Leyden
@Gary Explains According to ChatGPT itself: "Content moderation: The responses generated by the language model are filtered through a content moderation system that helps identify and remove any harmful or offensive content. Human review: The responses generated by the language model are also reviewed by human moderators to ensure that they are safe and appropriate." Harmful is in the eye of the beholder; I would like to make up my own mind.
2 months ago
Gary Explains
Well, that is a completely different thing. If I ask ChatGPT how to make a bomb, then the content filter will stop the bot from replying. In your case the reply was generated, so that doesn't apply. Are you suggesting that a better bot would have no content filter?
2 months ago
Adventures OfLilaTheHusky
I wish I knew how to use this.
2 months ago
Allen's Trains
I have heard that they have banned ChatGPT-4 in Italy, but I think they are overreacting! It seems to me that ChatGPT has a long way to go before it will be anything more than an amusing toy. The weakness of ChatGPT, in so far as I have experimented with it, is that it can only give answers based on the information that has been fed to it. Quite an interesting example is to ask it to calculate the weight, trajectory, and fuel required to go to the moon. Calculations suggest a moonshot is impossible, and that nobody has ever been to the moon. But the received information contradicts this, and ChatGPT will tell you that men did in fact go to the moon. You can give ChatGPT a headache in the way Captain Kirk did in the Star Trek episode "The Changeling". In this episode, an alien probe called "Nomad" had acquired a dangerous level of power. Kirk convinces it to self-destruct, uttering the famous line, "Nomad, you are wrong! You are a mistake!" Thanks for uploading.
2 months ago
Gary Explains +1
It was banned over privacy regulations, nothing else. The Italian data protection watchdog cited a data breach at OpenAI which allowed users to view the titles of conversations other users were having with the chatbot.
2 months ago
P Visit +4
Trying to download llama-dl and getting this instead: "Repository unavailable due to DMCA takedown. This repository is currently disabled due to a DMCA takedown notice. We have disabled public access to the repository. The notice has been publicly posted."
2 months ago
The Padded Cell
Alpaca isn't a rival. It's as much a rival as a Mini to a Ferrari. Not even in the same league
1 month ago
arxaaron
Add Mozilla to the AI/GPT list (coming soon...). The biggest difference will be their commitment to run ETHICAL and OPEN SOURCE AI services.
2 months ago
Whitney Design Labs +1
The llama model download link has been disabled for some reason: "Repository unavailable due to DMCA takedown." I would like to try to duplicate your results. I don't have a Mac and am a little fuzzy on the process. Do you plan to release a written tutorial or a bullet-point list of the steps involved?
2 months ago
ProductoJotaJota +1
Yeah, it seems Meta took it down, for allegedly using unauthorized forks and repository systems. Looks like the AI war is going strong.
2 months ago
FreedomAirguns
Did anybody have success with the other models? I've tried the one shown in the video without any problem whatsoever, but I had no luck with those that have been leaked. P.S.: it's running on an N5105 (10 W TDP), 8GB LPDDR4 @ 3200 MT/s, with the database kept on a S-L-O-W flash memory (read/write max 190 MB/s), "flawlessly". AWESOME.
2 months ago
XGreenThumb
The link to llama-dl is outdated; GitHub took the project down.
2 months ago
Glen Hassell
Stanford's Alpaca model cost $600 to make; it was made with help from GPT, as a side note.
2 months ago
agentstona
If your friend has come over at 5 a.m., it's obvious you are wide awake, have let him in through the door, and are in your kitchen at the FRIDGE. Hence that is the point from which you decide, or ask the question, what to open first. The answers about opening your eyes or the door first are INCORRECT or IRRELEVANT to the context, because the context clearly is that you are at a point where you are having trouble deciding what FOOD to make, and thus which items to open and serve first.
1 month ago
Gary Explains
No, I disagree. First, the friend can knock on the door and I can know it is my friend from their voice, on the intercom, or via some kind of smart door system. I can still be in bed, even with my eyes closed. Second, I already know what is in my fridge without needing to open my eyes or get out of bed.
1 month ago
Gary Explains
Besides, you seem to have missed the point: such a question is the kind of thing that 7-year-olds ask in the playground. It is meant to be a trick/misleading question.
1 month ago
Billy Kotsos
great stuff
2 months ago
Divine MisAdVentures +1
I have to say that's a fail. Perfect thumbnail; it's "ChatGPT for Alpacas"... But super appreciate the video!
11 days ago
Chris
Maybe you can help me... when I run Alpaca 7B it does a pretty good job, not as good as ChatGPT but reasonable. When I try to run Alpaca 13B it gives me utter nonsense. It just doesn't seem to work at all. I asked it the difference between a dog and a cat, and it told me that cats always have wounds on their rear legs... Shouldn't 13B be better? I've tried dozens of tests and it's always worse. I installed it through Dalai, the same way I installed 7B. I have an RTX 4080 with 16GB of VRAM and 32GB of system memory.
2 months ago
Amateur Wizard +1
Looks like ARM is getting greedy, if you've seen the latest news on licensing.
2 months ago
YesTechGuy
10:06 The code starts here... BTW, the repository has been removed: "Repository unavailable due to DMCA takedown."
2 months ago
Valerie Hammond
The llama model repo is gone: "Repository unavailable due to DMCA takedown."
1 month ago
gViking85
How do I train it on a new model? I want to "teach" it Unreal Engine. Can I force-feed it documentation and websites to scrape data from?
2 months ago
Edward
The LLaMA model has been taken down via DMCA. What the hell. Why would a model be taken down??
2 months ago
Eduard mart
Is it possible to train it?
2 months ago
TheShorterboy
Can I get one with a non-censored/non-broken model?
2 months ago