OpenAI is acquiring Neptune to deepen visibility into model behavior and strengthen the tools researchers use to track experiments and monitor training.
Based on the limited social mentions available, Neptune appears to be positioned as an ML experiment tracking tool in the machine learning community. One notable mention indicates that some users have moved away from Neptune to alternative solutions like GoodSeed, suggesting there may be room for improvement in user experience or functionality. The lack of detailed user reviews makes it difficult to assess specific strengths or complaints about Neptune's features, pricing, or overall performance. The multiple YouTube mentions suggest Neptune has some visibility in the ML tools space, but without more substantive user feedback, it's challenging to determine the overall user sentiment or reputation.
Mentions (30d): 0 · Reviews: 0 · Platforms: 2 · Sentiment: 0% (0 positive)
Industry: information technology & services · Employees: 81 · Funding Stage: Merger / Acquisition · Total Funding: $12.7M
[P] We made GoodSeed, a pleasant ML experiment tracker
# GoodSeed v0.3.0 🎉

My friend and I are pleased to announce **GoodSeed** - an ML experiment tracker which we are now using as a replacement for Neptune.

# Key Features

* **Simple and fast**: Beautiful, clean UI
* **Metric plots**: Zoom-based downsampling, smoothing, relative-time x axis, fullscreen mode, ...
* **Monitoring plots**: GPU/CPU usage (both NVIDIA and AMD), memory consumption, GPU power usage
* **Stdout/stderr monitoring**: View your program's output online.
* **Structured configs**: View your hyperparams and other configs in a filesystem-like interactive table.
* **Git status logging**: Compare the state of your git repo across experiments.
* **Remote server** (beta): Back up your experiments to a remote server and view them online. For now, we only support metrics, strings, and configs (no files).
* **Neptune proxy**: View your Neptune runs through the GoodSeed web app. You can also migrate your runs to GoodSeed (either to local storage or to the remote server).

# Try it

* Web: [https://goodseed.ai/](https://goodseed.ai/)
  * Click on *Demo* to see the app with an example project.
  * *Connect to Neptune* to see your Neptune runs in GoodSeed.
* `pip install goodseed` to log your experiments.
* *Log In* to create an account and sync your runs with a remote server (we only have limited seats for now because the server is quite expensive - we might set up some form of subscription later).
* Repo (MIT): [https://github.com/kripner/goodseed](https://github.com/kripner/goodseed)
* Migration guide from Neptune: [https://docs.neptune.ai/transition_hub/migration/to_goodseed](https://docs.neptune.ai/transition_hub/migration/to_goodseed)
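One feature above, zoom-based downsampling of metric plots, is a generic technique worth illustrating: for the visible x-range, bucket the points and keep each bucket's min and max so spikes survive. The sketch below is illustrative only - it is not GoodSeed's actual implementation, and `downsample` is a hypothetical helper name.

```python
# Minimal sketch of zoom-based downsampling for metric plots (illustrative,
# not GoodSeed's code): keep at most `max_points` points for the visible
# x-range by bucketing steps and retaining each bucket's min and max value,
# so spikes are preserved after downsampling.

def downsample(points, x_min, x_max, max_points=500):
    """points: list of (step, value); returns a downsampled list for [x_min, x_max]."""
    visible = [(x, y) for x, y in points if x_min <= x <= x_max]
    if len(visible) <= max_points:
        return visible
    n_buckets = max_points // 2          # each bucket contributes up to 2 points
    span = x_max - x_min
    buckets = {}
    for x, y in visible:
        b = min(int((x - x_min) / span * n_buckets), n_buckets - 1)
        buckets.setdefault(b, []).append((x, y))
    out = []
    for b in sorted(buckets):
        pts = buckets[b]
        lo = min(pts, key=lambda p: p[1])  # bucket minimum
        hi = max(pts, key=lambda p: p[1])  # bucket maximum
        out.extend(sorted({lo, hi}, key=lambda p: p[0]))
    return out
```

A short series passes through unchanged; a long one is reduced while keeping its extremes, which is what makes zoomed-out loss curves still show their spikes.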
[R] AudioMuse-AI-DCLAP - LAION CLAP distilled for text to music
Hi All,

I just want to share that I distilled the [LAION CLAP](https://github.com/LAION-AI/CLAP) model, specialized for music, and called it AudioMuse-AI-DCLAP. It enables searching songs by text by projecting both text and songs into the same 512-dimensional embedding space. You can find the .onnx model, free and open source, on GitHub:

* [https://github.com/NeptuneHub/AudioMuse-AI-DCLAP](https://github.com/NeptuneHub/AudioMuse-AI-DCLAP)

It will also soon (currently in devel) be integrated into AudioMuse-AI, enabling users to automatically create playlists by searching with text. This functionality already exists using the teacher; the goal of the distilled model is to make it faster:

* [https://github.com/NeptuneHub/AudioMuse-AI](https://github.com/NeptuneHub/AudioMuse-AI)

The text tower is unchanged, because even though it's bigger in size, it's already very fast to execute given text input. I distilled the audio tower using this pretrained model as a teacher:

* music_audioset_epoch_15_esc_90.14

The result is that you go from 295 MB and around 80M parameters to 23 MB and around 7M parameters. I still need to check speed more carefully, but it is at least 2-3x faster. In this first distillation run I reached a validation cosine similarity of 0.884 between teacher and student, and below you can find more tests on MIR metrics.

For distillation I did:

* a first student model, starting from the EfficientAT ms10as pretrained model of around 5M parameters;
* when I reached a plateau around 0.85 cosine similarity (after testing different parameters), I froze the model and added an additional smaller student, the EdgeNeXt xx_small of around 1.4M parameters.

The Music Information Retrieval (MIR) metrics below are calculated against a 100-song collection; I'm currently trying a more realistic case against my entire library. Some queries are of course very tricky (and the results highlight this); I want to check whether they still return useful results over a bigger collection.
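The retrieval step described above - project text and songs into the same 512-dimensional space, then rank by similarity - can be sketched generically. The embeddings below are random stand-ins, not real CLAP tower outputs:

```python
import numpy as np

# Sketch of text-to-music search in a shared embedding space:
# rank songs by cosine similarity to a text query embedding.
# Random vectors stand in for the CLAP text/audio tower outputs.

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
song_embs = normalize(rng.normal(size=(100, 512)))   # stand-in audio tower outputs
query_emb = normalize(rng.normal(size=(512,)))       # stand-in text tower output

scores = song_embs @ query_emb    # cosine similarity (vectors are unit-norm)
top10 = np.argsort(-scores)[:10]  # indices of the best-matching songs
```

Because both towers land in the same space, the only per-query cost is one matrix-vector product over the precomputed song embeddings, which is why shrinking the audio tower (the part that builds `song_embs`) is where the distillation pays off.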
The queries used are only examples; you can still use any combination you would use in LAION CLAP, because the text tower is unchanged. If you have any questions, suggestions, or ideas, please let me know. If you like it, you can support me by putting a star on my GitHub repositories.

**EDIT:** I just did some tests on a Raspberry Pi 5, and DCLAP is 5-6x faster than LAION CLAP. This brings the possibility of analyzing songs in a decent amount of time even on a low-performance homelab (keep in mind that users analyze collections of thousands of songs, and an improvement like this means having them analyzed in less than a week instead of months).

| Query | Teacher | Student | Delta |
|---|---|---|---|
| Calm Piano song | +0.0191 | +0.0226 | +0.0035 |
| Energetic POP song | +0.2005 | +0.2268 | +0.0263 |
| Love Rock Song | +0.2694 | +0.3298 | +0.0604 |
| Happy Pop song | +0.3236 | +0.3664 | +0.0428 |
| POP song with Female vocalist | +0.2663 | +0.3091 | +0.0428 |
| Instrumental song | +0.1253 | +0.1543 | +0.0290 |
| Female Vocalist | +0.1694 | +0.1984 | +0.0291 |
| Male Vocalist | +0.1238 | +0.1545 | +0.0306 |
| Ukulele POP song | +0.1190 | +0.1486 | +0.0296 |
| Jazz Sax song | +0.0980 | +0.1229 | +0.0249 |
| Distorted Electric Guitar | -0.1099 | -0.1059 | +0.0039 |
| Drum and Bass beat | +0.0878 | +0.1213 | +0.0335 |
| Heavy Metal song | +0.0977 | +0.1117 | +0.0140 |
| Ambient song | +0.1594 | +0.2066 | +0.0471 |
| **OVERALL MEAN** | +0.1392 | +0.1691 | +0.0298 |

MIR ranking metrics: R@1, R@5, mAP@10 (teacher top-5 as relevance)

| Query | R@1 | R@5 | mAP@10 | Overlap10 | Ordered10 | MeanShift |
|---|---|---|---|---|---|---|
| Calm Piano song | 0/1 | 4/5 (80.0%) | 0.967 | 7/10 | 2/10 | 2.20 |
| Energetic POP song | 1/1 | 2/5 (40.0%) | 0.508 | 5/10 | 2/10 | 5.40 |
| Love Rock Song | 0/1 | 3/5 (60.0%) | 0.730 | 8/10 | 1/10 | 3.10 |
| Happy Pop song | 0/1 | 2/5 (40.0%) | 0.408 | 4/10 | 0/10 | 6.20 |
| POP song with Female vocalist | 0/1 | 2/5 (40.0%) | 0.489 | 7/10 | 0/10 | 4.90 |
| Instrumental song | 1/1 | 3/5 (60.0%) | 0.858 | 8/10 | 3/10 | 3.00 |
| Female Vocalist | 0/1 | 2/5 (40.0%) | 0.408 | 5/10 | 0/10 | 9.80 |
| Male Vocalist | | | | | | |
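For reference, the ranking metrics in the table - R@1, R@5, and mAP@10, with the teacher's top-5 used as the relevance set - can be computed as follows. The ranked lists here are toy data, not the post's actual results:

```python
# Sketch of the MIR ranking metrics used above: the teacher's top-5
# results for a query define the relevant set, and the student's
# ranking is scored against it.

def recall_at_k(student_ranking, relevant, k):
    """Fraction of `relevant` items that appear in the student's top-k."""
    hits = len(set(student_ranking[:k]) & set(relevant))
    return hits / len(relevant)

def map_at_10(student_ranking, relevant):
    """Mean average precision over the student's top-10."""
    score, hits = 0.0, 0
    for i, item in enumerate(student_ranking[:10], start=1):
        if item in relevant:
            hits += 1
            score += hits / i  # precision at each relevant hit
    return score / len(relevant)

# Toy example: the teacher's top-5 defines relevance for one query.
teacher_top5 = ["a", "b", "c", "d", "e"]
student = ["b", "x", "a", "c", "y", "d", "z", "e", "w", "q"]
# recall_at_k(student, teacher_top5, 1) → 0.2 (1 of 5 relevant in top-1)
# recall_at_k(student, teacher_top5, 5) → 0.6 (3 of 5 relevant in top-5)
```

Using the teacher's ranking as ground truth measures how faithfully the student reproduces the teacher, independently of whether either ranking is "correct" in absolute terms.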