OpusClip turns long videos into high-quality viral clips and publishes them to all social platforms in one click. We help 10M+ creators create and grow.
Based on the single social mention available, users express concerns about Opus Clip's value proposition, with one reviewer finding the $29/month pricing too high for the features provided. The reviewer tested the service and concluded they "can't recommend it," suggesting the tool doesn't deliver sufficient value at its current price point. Without additional reviews, it's difficult to assess broader user sentiment, but pricing appears to be a key pain point for potential customers evaluating AI-powered video editing services.
Mentions (30d): 0
Reviews: 0
Platforms: 3
Sentiment: 0% (0 positive)
Industry: information technology & services
Employees: 140
Funding Stage: Series B
Total Funding: $50.0M
Day 1 of testing paid AI services so you don’t have to. I checked out Opus Clip today. Can’t say I recommend it. $29 a month seems like a lot for what you actually get for that price point and I’d rather allocate that to $20 on chat gpt pro and probably Canva pro at $15. #ai #opusclip #aitools #aiproducts #aiservices #chatgpt #opuspro #review #techreview
Pricing found: $2,700, $15, $0, $0, $15/mo
ARC AGI 3 sucks
ARC-AGI-3 is a deeply rigged benchmark and the marketing around it is insanely misleading:

- Human baseline is not "human," it's near-elite human. They normalize to the second-best first-run human by action count, not the average or median human. So "humans score 100%" is PR wording, not a normal-human reference.
- The scoring is asymmetrically anti-AI. If AI is slower than the human baseline, it gets punished with a squared ratio. If AI is faster, the gain is clamped away at 1.0. So AI downside counts hard, AI upside gets discarded (a numeric sketch of this asymmetry follows the post).
- Big AI wins are erased, losses are amplified. If AI crushes humans on 8 tasks and is worse on 2, the 8 wins can get flattened while the 2 losses drag the total down hard. That makes it a terrible measure of overall capability.
- The official eval refuses harnesses even when harnesses massively improve performance. Their own example shows Opus 4.6 going from 0.0% to 97.1% on one environment with a harness. If a wrapper can move performance from zero to near saturation, then the benchmark is hugely sensitive to interface/policy setup, not just "intelligence."
- Humans get vision, AI gets symbolic sludge. Humans see an actual game. AI agents were apparently given only a JSON blob. On a visual task, that is a massive handicap. A low score under that setup proves bad representation/interface as much as anything else.
- Humans were given a starting hint. The screenshot shows humans got a popup telling them the available controls and explicitly saying there are controls, rules, and a goal to discover. That is already scaffolding. So the whole "no handholding" purity story falls apart immediately.
- Human and AI conditions are not comparable. Humans got visual presentation, control hints, and a natural interaction loop. AI got a serialized abstraction with no goal stated. That is not a fair human-vs-AI comparison. It is a modality handicap.
- "Humans score 100%, AI <1%" is misleading marketing. That slogan makes it sound like average humans get 100 and AI is nowhere close. In reality, 100 is tied to near-top human efficiency under a custom asymmetric metric. That is not the same claim at all.
- Not publishing the average human score is suspicious as hell. If you're going to sell the benchmark through human comparison, where is the average human? The median human? The top 10%? Without those, "human = 100%" is just spin.
- Testing ~500 humans makes the baseline more extreme, not less. If you sample hundreds of people and then anchor to the second-best performer, you are using a top-tail human reference while avoiding the phrase "best human" for optics.
- The benchmark confounds reasoning with perception and interface design. If the score changes massively depending on whether the model gets a decent harness/vision setup, then the benchmark is not isolating general intelligence. It is mixing reasoning with input representation and interaction policy.
- The clamp hides possible superhuman performance. If the model is already above human on some tasks, the metric won't show it. It just clips to 1. So the benchmark can hide that AI may already beat humans in multiple categories.
- An "unbeaten benchmark" can be maintained by score design, not task difficulty. If public tasks are already being solved and harnesses can push scores near the ceiling, then the remaining "hardness" is increasingly coming from eval policy and metric choices, not unsolved cognition.
- The benchmark is basically measuring "distance from our preferred notion of human-like efficiency." That can be a niche research question, but it is absolutely not the same thing as a fair AGI benchmark or a clean statement about whether AI is generally smarter than humans.

Bottom line: ARC-AGI-3 is not a neutral intelligence benchmark. It is a benchmark-shaped object designed to preserve a dramatic human-AI gap by using an elite human baseline, asymmetric math, an anti-harness policy, and non-comparable human vs AI interfaces.

submitted by /u/the_shadow007
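The post never states the exact ARC-AGI-3 scoring formula, so the following is only a minimal sketch of the asymmetry it describes, assuming a per-task score of min(1, human_actions / ai_actions)^2: the penalty is squared when the AI needs more actions than the human baseline, while any advantage is clamped at 1.0. The function name and all action counts below are made up purely for illustration.

```python
# Hedged sketch of the asymmetric, clamped scoring the post describes.
# Assumed per-task score: min(1.0, human_actions / ai_actions) ** 2
# (the real ARC-AGI-3 formula may differ; the numbers are invented).

def task_score(human_actions: float, ai_actions: float) -> float:
    ratio = human_actions / ai_actions
    # Faster than the human baseline is clamped to 1.0;
    # slower than the baseline is squared, amplifying the penalty.
    return min(1.0, ratio) ** 2

# 8 tasks where the AI is 10x more action-efficient than the human
# baseline, 2 tasks where it is 10x less efficient.
human = [100] * 10
ai = [10] * 8 + [1000] * 2

scores = [task_score(h, a) for h, a in zip(human, ai)]
print(scores)                     # [1.0]*8 + [0.01, 0.01]
print(sum(scores) / len(scores))  # 0.802: 8 big wins flattened to 1.0

# Without the clamp, the same runs would average far above parity:
raw = [(h / a) ** 2 for h, a in zip(human, ai)]
print(sum(raw) / len(raw))        # ~80.0
```

Under this assumed metric, an agent that is ten times more action-efficient than the baseline on 8 of 10 tasks still aggregates below the human line, which is exactly the flattening effect the post complains about.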
Yes, Opus Clip offers a free tier.
Key features include: ClipAnything, ReframeAnything, Brand templates, Team workspace, and Workflow integration. FAQs cover: How does OpusClip work? What types of videos can I upload? Which languages are supported?