Turn your ideas into videos with hyperreal motion and sound.
Based on the limited mentions available, users have mixed feelings about Sora's value proposition. Some users appreciate having access to Sora as part of ChatGPT Pro's $200/month tier, but many question whether the high price point is justified for their specific use cases. There are concerns about Sora's API costs being expensive, and OpenAI's decision to shut down the tool after just six months has raised suspicions among users about the platform's reliability and long-term commitment. Overall, while users seem interested in Sora's video generation capabilities, the pricing and sudden discontinuation have created skepticism about its worth and sustainability.
Mentions (30d): 1
Reviews: 0
Platforms: 4
Sentiment: 0% positive (0 positive mentions)
Industry: Research
Employees: 7,500
Funding Stage: Venture (round not specified)
Total Funding: $281.9B
I just got ChatGPT Pro - is it really worth $200 a month 🤔 Will I be sad when I downgrade back to $20 a month Plus? No and no. But let's dive into why and how Pro is different.

There are basically four headline features of ChatGPT Pro:
1. Sora, the video generator.
2. Operator, the autonomous agent that uses the internet for you.
3. Deep Research, an agent-like researcher that goes off, does very in-depth research for you, and returns long, detailed reports.
4. o1 Pro Mode, their most advanced reasoning model, which is great for very specific applications like scientific research, financial modeling, medical diagnosis, science and maths, and coding as well.

The other thing Pro has, which people don't talk about so much, is a much longer context window. If you're paying $20 a month for ChatGPT Plus, your context window, or conversation length, is only about 25,000 words. That might sound like a lot, but if you're going back and forth with it, exchanging information, feeding back, and inputting documents, you can eat up those 25,000 words quite quickly. Particularly if you start coding, that melts away. ChatGPT Pro has a context window of about 100k words, and I think I've felt the benefit of that. Compare that with the fact that Claude and Gemini have absolutely massive context windows: Google Gemini Pro's is 1.5 million words. So that alone isn't worth $200 a month.

What about those key features? Those four headline features are what really attract people, but I'll be waiting for them to mature and make their way down to the Plus plan! #chatgpt #aitools #ai #artificialintelligence #learnontikok #tech #technology #openai #GPT #aiagent
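The reviewer's point about the 25k-word window "melting away" can be sanity-checked with a quick sketch. The window sizes come from the post; the words-per-exchange figure is an illustrative assumption, not a measured number:

```python
# Rough sketch of how fast a context window fills, using the word
# counts quoted in the post (25k words for Plus, 100k for Pro).
# The per-exchange size below is an illustrative assumption.

PLUS_WINDOW = 25_000   # words, ChatGPT Plus (per the post)
PRO_WINDOW = 100_000   # words, ChatGPT Pro (per the post)

def exchanges_until_full(window_words, words_per_exchange=1_500):
    """How many prompt-plus-reply round trips fit before the window is full."""
    return window_words // words_per_exchange

print(exchanges_until_full(PLUS_WINDOW))  # 16 round trips on Plus
print(exchanges_until_full(PRO_WINDOW))   # 66 round trips on Pro
```

At an assumed 1,500 words per round trip, Plus runs out after roughly 16 exchanges, which is consistent with the reviewer's experience that document-heavy or coding sessions exhaust it quickly.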
Sora account
Trying to get my account back so I can transfer all my videos to my device. submitted by /u/Kind_Function_9628
Sora account
https://sora.chatgpt.com/invite?code=BRT78X 👀 submitted by /u/Kind_Function_9628
Google's Veo 3.1 Lite Cuts API Costs in Half as OpenAI's Sora Exits the Market
Google just cut Veo 3.1 API prices across the board today (April 7). The Lite tier is now $0.05/sec, less than half the cost of Fast. The timing is interesting given that OpenAI killed Sora last week after burning ~$15M/day against only $2.1M in total revenue. Google now basically owns the AI video API space, with no real competitor left standing. submitted by /u/Least-Analysis-3910
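A quick back-of-the-envelope check on that pricing claim. The $0.05/sec Lite rate is quoted in the post; the post only says Lite is "less than half the cost of Fast", so the $0.12/sec Fast rate below is an assumed illustrative figure:

```python
# Sketch comparing per-clip API cost at video-generation per-second rates.
# Lite pricing ($0.05/sec) comes from the post; the Fast price of
# $0.12/sec is an assumption for illustration, not a confirmed number.

LITE_PER_SEC = 0.05   # USD/sec (quoted in the post)
FAST_PER_SEC = 0.12   # USD/sec (assumed, consistent with "less than half")

def clip_cost(seconds, rate_per_sec):
    """Cost in USD of generating one clip of the given length."""
    return round(seconds * rate_per_sec, 2)

print(clip_cost(30, LITE_PER_SEC))  # 1.5  -> a 30s clip on Lite
print(clip_cost(30, FAST_PER_SEC))  # 3.6  -> same clip at the assumed Fast rate
```

Under these assumptions a 30-second clip costs $1.50 on Lite versus $3.60 on Fast, which is the kind of gap that matters at API scale.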
Possible new Sora model?
Was on this AI arena website; I know the new gpt-image-2 was found on something similar. I was on the video arena and (after a few tries you will stumble on it too) found a video model that surpasses every single one right now by far. Thought it might be a new Veo or Sora model. Check for yourself: https://artificialanalysis.ai/video/arena It's called 'happyhorse-1.0'. submitted by /u/sellatine
what is happen to . . .
Would it be reckless to ask OpenAI about the discrepancy between this system message and reality? what is ? submitted by /u/st4rdus2
Wow, that escalated quickly
Made on Sora 2. submitted by /u/memerwala_londa
Good video generator after the disappearance of Sora 2? (Not looking for a crappy answer from an entitled ****)
Help, we all know Sora is down, so can you tell me where I can find a decent one that's just as free? submitted by /u/frederick_ormasgum7
finally took AI video seriously after dismissing it for two years and have some thoughts
Hey everyone! I do real estate videography in LA, mostly higher end residential stuff in areas like Los Feliz and Silver Lake, and for the past year or so I've been slowly incorporating AI video into my pre-production process in a way that has genuinely changed how I work with clients. I wanted to share what that actually looked like in practice, because most of what I see online about AI video is either people hyping it up way too much or dismissing it entirely, and the reality for working videographers is somewhere messier and more interesting than either of those takes.

How it started
About a year ago I had a client, a real estate agent who works with a lot of out of state buyers, ask me if I could show her roughly what a property walkthrough would look like before we committed to a shoot day. She wanted to send something to her client overseas to get buy-in before flying them out. I didn't really have a good answer for her at the time. I sent over some reference videos from past projects and she was polite about it, but I could tell it wasn't what she was asking for. That stuck with me. I started looking into whether AI video tools could fill that gap, not as a replacement for the actual shoot but as a way to give clients a rough visual direction early in the process. What I found was that the tools varied a lot more than I expected, in ways that took me a while to understand.

What I actually learned from using them
The first thing that surprised me was how differently each model handles interior spaces: lighting consistency from room to room, the way natural light comes through windows, how furniture reads on screen. These things matter a lot for real estate work, and some models handled them way better than others. Veo ended up being the most reliable for that kind of controlled interior work; the output was clean enough that two clients I showed early concepts to didn't realize it wasn't footage I had already shot. For exterior shots and neighborhood context, wider establishing stuff, I got better results from Sora, even though getting access was more annoying than it should be. And for anything more stylized, like a concept reel to help a client visualize a renovation before it happened, Wan turned out to be more useful than I expected going in.

The bigger problem I ran into was that managing all of these tools separately was eating up way more time than I anticipated. Different platforms, different credit systems, files scattered all over the place. I was spending a chunk of every morning just getting organized before I could do any actual work. Someone in a Facebook group for videographers mentioned Prism as a way to manage multiple models from one place, and that ended up solving most of that problem for me. There's also a pretty good discussion on r/videography from a few months back about AI pre-viz workflows that's worth reading if you want more perspectives, and a breakdown on YouTube goes into how other commercial shooters are thinking about integrating these tools without replacing their core work.

What my process looks like now
I now offer a concept preview as part of my standard package for any listing over a certain price point. It takes me a couple of hours to put together something rough enough to be useful, and clients respond really well to it. The agent I mentioned at the beginning has referred me to three other agents in her office specifically because of this; she brings it up every time. The actual shoot still matters just as much as it always did. The AI stuff is just a way to get everyone on the same page before we get there, so we're not making decisions on the day that should have been made weeks earlier. If anyone has questions about how this works in practice for real estate specifically, I'm happy to go into more detail. submitted by /u/SpecificFee6350
Altman on shutting down Sora: 'I did not expect 3 or 6 months ago to be at this point we're at now; where something very big and important is about to happen again with this next generation of models and the agents they can power.'
https://youtu.be/mJSnn0GZmls 'We have a few times in our history realized something really important is working, or about to work so well, that we have to stop a bunch of other projects. In fact, this was the original thing that happened with GPT-3. We had a whole portfolio of bets at the time. A lot of them were working well. We shut down many projects that were working well, like robotics, which we mentioned, so that we could concentrate our compute, our researchers, our effort into this thing where we said "okay, there's a very important thing happening." I did not expect 3 or 6 months ago to be at this point we're at now; where something very big and important is about to happen again with this next generation of models and the agents they can power.' He goes on to imply there may be a future relationship with Disney, then finishes with: 'we need to concentrate our compute and our product capacity into these next generation of automated researchers and companies.' submitted by /u/Tolopono
OpenAI is throwing away Sora’s real value
If the issue with Sora is compute cost, then shutting down the entire platform — including Sora 1 — doesn’t make much sense. Sora 1’s image generation was one of the few systems that actually delivered contextually coherent results. For fields like historical research and documentary content, that level of understanding is rare and extremely valuable. If Sora 2 (video) is too resource-intensive, fine — scale that down or remove it. But Sora 1 could have been preserved as a high-quality image generation tool. It already had a strong foundation and a clear use case. From a user perspective, it feels like a mistake to discard something that was not only a first mover, but also genuinely ahead in terms of output quality and contextual accuracy. submitted by /u/flashback80
Well... only lasted 5 months
submitted by /u/GREATD4NNY
Stanford CS 25 Transformers Course (OPEN TO ALL | Starts Tomorrow)
Tl;dr: One of Stanford's hottest AI seminar courses. We open the course to the public. Lectures start tomorrow (Thursdays), 4:30-5:50pm PDT, at Skilling Auditorium and Zoom. Talks will be recorded. Course website: https://web.stanford.edu/class/cs25/. Interested in Transformers, the deep learning model that has taken the world by storm? Want to have intimate discussions with researchers? If so, this course is for you! Each week, we invite folks at the forefront of Transformers research to discuss the latest breakthroughs, from LLM architectures like GPT and Gemini to creative use cases in generating art (e.g. DALL-E and Sora), biology and neuroscience applications, robotics, and more! CS25 has become one of Stanford's hottest AI courses. We invite the coolest speakers such as Andrej Karpathy, Geoffrey Hinton, Jim Fan, Ashish Vaswani, and folks from OpenAI, Anthropic, Google, NVIDIA, etc. Our class has a global audience, and millions of total views on YouTube. Our class with Andrej Karpathy was the second most popular YouTube video uploaded by Stanford in 2023! Livestreaming and auditing (in-person or Zoom) are available to all! And join our 6000+ member Discord server (link on website). Thanks to Modal, AGI House, and MongoDB for sponsoring this iteration of the course. submitted by /u/MLPhDStudent
Inside OpenAI's decision to abandon Sora AI video app
submitted by /u/LinkedInNews
Unpopular Opinion: I’m glad Sora is gone
As a Creative, I’ve attempted to use it for both professional and hobbyist purposes. It fails at both. Higgsfield, and sometimes even Veo, is better, though both are unreliable at scale. At least GPT Image is actually useful. AI faces, like all industries, the classic economic problem of allocation. I’m hoping that with that (very resource-intensive) platform gone, OpenAI:
A) Allocates more compute for text models such as 5.4 and 5.5/Spud
B) Allows using 5.4 Pro (with limited queries) for Plus
C) Increases context window and accuracy, with a boost in memory
D) Builds better integration, like Claude has, across multiple professional (Office) and personal apps (Messages)
Overall, I’m wondering how OpenAI will utilize the new breathing room. submitted by /u/Goofball-John-McGee
Sora uses a tiered pricing model. Visit their website for current pricing details.
Based on user reviews and social mentions, the most common pain points are API costs and the concerns raised by OpenAI's abrupt shutdown of the platform.
Based on 33 social mentions analyzed, sentiment is 0% positive, 100% neutral, and 0% negative.
The Rundown AI (newsletter): 2 mentions