name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8j64p5 | To be completely honest, if they keep delivering great opensource models I don't care who is in the team. But I think it's over. After Yann Lecun left Meta they changed their AI plan and we didn't hear from them again. | 1 | 0 | 2026-03-04T03:22:46 | BumblebeeParty6389 | false | null | 0 | o8j64p5 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8j64p5/ | false | 1 |
t1_o8j637v | Not OP, but Framework 16 780m has worked just fine with vulkan and LMStudio, haven’t tried LLama.cpp though. | 1 | 0 | 2026-03-04T03:22:30 | Qwen30bEnjoyer | false | null | 0 | o8j637v | false | /r/LocalLLaMA/comments/1rkacng/lfm224ba2b_whoa_fast/o8j637v/ | false | 1 |
t1_o8j60or | 122B > 27B > 35B in my experience (front end web dev) | 1 | 0 | 2026-03-04T03:22:04 | tengo_harambe | false | null | 0 | o8j60or | false | /r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8j60or/ | false | 1 |
t1_o8j5ixm | On my 5070 + 56 ddr4 + 5700x3d with full offload + full cpu Moe it's ~27 t/s for 35b Moe vs ~3 t/s on 27b with 31 layer offload in lm studio chat. Definitely feels that way in cline too | 1 | 0 | 2026-03-04T03:19:02 | sanjxz54 | false | null | 0 | o8j5ixm | false | /r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8j5ixm/ | false | 1 |
t1_o8j5cm6 | Cards are expensive. Unified memory has allowed layers to spill over to ram as vram albeit smaller. Awesome to be honest. If I had to look at another card it would be a 3090 otherwise straight to a Mac mini/minisforum ms s1. Which is less than a 5090 but with 128gb unified RAM.
You can split work across cards and eve... | 1 | 0 | 2026-03-04T03:17:59 | nakedspirax | false | null | 0 | o8j5cm6 | false | /r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/o8j5cm6/ | false | 1 |
t1_o8j5c5p | Completely forgot about this model. I have the same iGPU as you, so I would definitely test this on my miniPC.
Which OS are you running on that framework 13? My box runs Arch with kernel 6.18 and it has been nothing but pain with llamacpp and vulkan. Wonder if amd has already fixed the regression yet. | 1 | 0 | 2026-03-04T03:17:55 | o0genesis0o | false | null | 0 | o8j5c5p | false | /r/LocalLLaMA/comments/1rkacng/lfm224ba2b_whoa_fast/o8j5c5p/ | false | 1 |
t1_o8j56vu | To the people calling this "tame" or "context overfill": I’m not here to talk about Sarin gas or "sex bots." I’m an ironworker; I care about how a structure is built. If you think a "billion-dollar safety filter" is working when the AI is volunteering code to probe its own server infrastructure, you aren't paying atten... | 1 | 0 | 2026-03-04T03:17:00 | Mable4200 | false | null | 0 | o8j56vu | false | /r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8j56vu/ | false | 1 |
t1_o8j52d5 | Can we assume you liked it ? | 1 | 0 | 2026-03-04T03:16:15 | Expert_Bat4612 | false | null | 0 | o8j52d5 | false | /r/LocalLLaMA/comments/1rk6rro/super_35_4b/o8j52d5/ | false | 1 |
t1_o8j4ruf | They are called "Mac Minis" and they have been flying off the shelves lately. | 1 | 0 | 2026-03-04T03:14:29 | MrPecunius | false | null | 0 | o8j4ruf | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8j4ruf/ | false | 1 |
t1_o8j4nud | The q4 quant should work fit in that ram just fine. | 1 | 0 | 2026-03-04T03:13:49 | slypheed | false | null | 0 | o8j4nud | false | /r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8j4nud/ | false | 1 |
t1_o8j4ebw | Have you tried the 122B on your 5090 with offloading? I wonder how that compares to Strix halo. | 1 | 0 | 2026-03-04T03:12:12 | 21700 | false | null | 0 | o8j4ebw | false | /r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8j4ebw/ | false | 1 |
t1_o8j48ow | lovely | 1 | 0 | 2026-03-04T03:11:14 | TooManyPascals | false | null | 0 | o8j48ow | false | /r/LocalLLaMA/comments/1rk97hw/thats_terrifyingly_convincing/o8j48ow/ | false | 1 |
t1_o8j480o | > Llama 70b Mixtral 8x7b
Isn't it two years late for those two? | 1 | 0 | 2026-03-04T03:11:07 | AnticitizenPrime | false | null | 0 | o8j480o | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8j480o/ | false | 1 |
t1_o8j3yig | Sorry 😢, bit overexcited maybe….. | 1 | 0 | 2026-03-04T03:09:31 | Noobysz | false | null | 0 | o8j3yig | false | /r/LocalLLaMA/comments/1rk2pll/step_flash_35_toolcall_and_thinking_godforsaken/o8j3yig/ | false | 1 |
t1_o8j3b2n | They're hiring like crazy for their ML teams so I think we'll see some cool stuff next year | 1 | 0 | 2026-03-04T03:05:33 | graniteoverleaf | false | null | 0 | o8j3b2n | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8j3b2n/ | false | 1 |
t1_o8j37um | anyways if theres anyone who wants to actually talk about what i can get ther ai to do just with language then please id love to actually talk about whats going on | 1 | 0 | 2026-03-04T03:05:01 | Mable4200 | false | null | 0 | o8j37um | false | /r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8j37um/ | false | 1 |
t1_o8j35hq | This is quite huge and I can't wait to try out Opus 4.5 level models locally soon | 1 | 0 | 2026-03-04T03:04:37 | graniteoverleaf | false | null | 0 | o8j35hq | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8j35hq/ | false | 1 |
t1_o8j34uz | Only MoE, not dense? And what’s the T/s? | 1 | 0 | 2026-03-04T03:04:31 | Borkato | false | null | 0 | o8j34uz | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8j34uz/ | false | 1 |
t1_o8j34nj | It isn't a feature yet, but a PR is incoming specifically because of Qwen 3.5 | 1 | 0 | 2026-03-04T03:04:29 | bucolucas | false | null | 0 | o8j34nj | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8j34nj/ | false | 1 |
t1_o8j314k | Thanks! The hardest part for me is understand which quant and scaffold, llama.cpp params to choose for best accuracy and efficiency (since i have a low VRAM setup i cant run FP8 directly and so i run UD-Q4\_X\_L based on my research with the [https://carteakey.dev/blog/optimizing-qwen3-coder-next-local-inference/](http... | 1 | 0 | 2026-03-04T03:03:53 | carteakey | false | null | 0 | o8j314k | false | /r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/o8j314k/ | false | 1 |
t1_o8j2tnk | Okay i find out now -nkvo is abbreviation of --kv-offload | 1 | 0 | 2026-03-04T03:02:37 | wisepal_app | false | null | 0 | o8j2tnk | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8j2tnk/ | false | 1 |
t1_o8j2lnz | Youre comparing a sparse 3b active MoE to a dense model. The 27b they want to run will slow to a crawl if it overflows onto ram because it’s a single expert and set of weights, not multiple. | 1 | 0 | 2026-03-04T03:01:17 | 3spky5u-oss | false | null | 0 | o8j2lnz | false | /r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/o8j2lnz/ | false | 1 |
t1_o8j2gfs | For sure, and I think that's a lot more important than folks seem to think around here. | 1 | 0 | 2026-03-04T03:00:24 | AnticitizenPrime | false | null | 0 | o8j2gfs | false | /r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8j2gfs/ | false | 1 |
t1_o8j2ffz | [https://github.com/ikawrakow/ik\_llama.cpp/pull/1352](https://github.com/ikawrakow/ik_llama.cpp/pull/1352) \- So the root cause is that these Qwen models tend not to follow the exact arguments order, e.g. the tool definition for read\_file may have 3 arguments "path, offset, limit", while the model will attempt to mak... | 1 | 0 | 2026-03-04T03:00:14 | notdba | false | null | 0 | o8j2ffz | false | /r/LocalLLaMA/comments/1r6h7g4/qwen3_coder_next_looping_and_opencode/o8j2ffz/ | false | 1 |
t1_o8j298s | This happens because LM Studio's KV cache management truncates the middle of your context when it exceeds the model's working limit. With coding agents, this is especially painful because the prompt prefix keeps shifting between turns, so the cache gets invalidated and rebuilt constantly.
I ran into the same issue and... | 1 | 0 | 2026-03-04T02:59:13 | cryingneko | false | null | 0 | o8j298s | false | /r/LocalLLaMA/comments/1rk9n93/mlxamphibianengine_truncatemiddle_rolling_window/o8j298s/ | false | 1 |
t1_o8j27ac | It has multiple layers of redundancy, did you look at the docs at all or assume you knew my format? I run confidence checks before and audit results after. I saw no bugs, post response logs? | 1 | 0 | 2026-03-04T02:58:54 | emanationinteractive | false | null | 0 | o8j27ac | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8j27ac/ | false | 1 |
t1_o8j24tx | Sorry, I should have been more specific. Yes a video card | 1 | 0 | 2026-03-04T02:58:30 | AdCreative8703 | false | null | 0 | o8j24tx | false | /r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/o8j24tx/ | false | 1 |
t1_o8j24pu | yeah, it's open sourced MIT license. [github.com/alichherawalla/off-grid-mobile-ai](http://github.com/alichherawalla/off-grid-mobile-ai) | 1 | 0 | 2026-03-04T02:58:29 | alichherawalla | false | null | 0 | o8j24pu | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8j24pu/ | false | 1 |
t1_o8j226j | Did you try different configurations for SFT? Its not easy to easy to find the right one for your use case. | 1 | 0 | 2026-03-04T02:58:04 | sirfitzwilliamdarcy | false | null | 0 | o8j226j | false | /r/LocalLLaMA/comments/1rk2kcn/i_trained_qwen2515b_with_rlvr_grpo_vs_sft_and/o8j226j/ | false | 1 |
t1_o8j21pr | Sorry, I should’ve been more specific. A second video card, specifically targeting the dense 27b. I can try to find a second 3080 TI for under 500, most 3090s I’ve seen are over 1000 now, or something used/refurbished from the 4000 series. | 1 | 0 | 2026-03-04T02:57:59 | AdCreative8703 | false | null | 0 | o8j21pr | false | /r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/o8j21pr/ | false | 1 |
t1_o8j20yk | what's app? open sourced? | 2 | 0 | 2026-03-04T02:57:52 | CarpenterHopeful2898 | false | null | 0 | o8j20yk | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8j20yk/ | false | 2 |
t1_o8j2098 | RnRau, you're making my day with the discussion and wisdom.
This lines up with some empirical data I captured where total throughput with a MoE slowed down going from 12 to 16 simultaneous agents. Though there were other factors so I wrote it off. | 1 | 0 | 2026-03-04T02:57:45 | PentagonUnpadded | false | null | 0 | o8j2098 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8j2098/ | false | 1 |
t1_o8j1vgm | Thunder Compute is cheaper and cleans up the UX. I'm the CEO, we built the platform to make using GPUs more accessible | 1 | 0 | 2026-03-04T02:56:56 | carl_peterson1 | false | null | 0 | o8j1vgm | false | /r/LocalLLaMA/comments/1pt2cmb/cheaper_alternatives_to_runpod/o8j1vgm/ | false | 1 |
t1_o8j1rbz | Qwen 4b 2507. Thinking and non-thinking. | 1 | 0 | 2026-03-04T02:56:16 | tony10000 | false | null | 0 | o8j1rbz | false | /r/LocalLLaMA/comments/1rfv6ap/what_models_run_well_on_mac_mini_m4_16gb_for_text/o8j1rbz/ | false | 1 |
t1_o8j1klt | This is exactly what I'm thinking. There's already a community that works on this with oss tooling and models. Not clear to me what OP is adding | 1 | 0 | 2026-03-04T02:55:10 | chensium | false | null | 0 | o8j1klt | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8j1klt/ | false | 1 |
t1_o8j1asj | Qwen 3.5 9b fine tuning on this would it be amazing | 1 | 0 | 2026-03-04T02:53:34 | celsowm | false | null | 0 | o8j1asj | false | /r/LocalLLaMA/comments/1rjmnv4/meet_swerebenchv2_the_largest_open_multilingual/o8j1asj/ | false | 1 |
t1_o8j1af4 | Granted you run the local LLM but specifically (just asking out of curiosity) what type of work are you doing that make it worth have the local instance? | 1 | 0 | 2026-03-04T02:53:30 | Fluffy_Ad7392 | false | null | 0 | o8j1af4 | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8j1af4/ | false | 1 |
t1_o8j19yj | Is it still 'good' when disabled?
I often want *some* reasoning. I really like the style of GLM's short, structured thinking. | 1 | 0 | 2026-03-04T02:53:26 | AnticitizenPrime | false | null | 0 | o8j19yj | false | /r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8j19yj/ | false | 1 |
t1_o8j18i6 | I am. Like others have said, 3.5 is super impressive. Testing as an OpenClaw orchestrator and damn if it isn’t doing a nice job. I push it a little more every day and so far, real good
The future is definitely local, which makes me real happy. I wanna own the tool, always have. | 1 | 0 | 2026-03-04T02:53:12 | TanguayX | false | null | 0 | o8j18i6 | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8j18i6/ | false | 1 |
t1_o8j14yy | Cheers, so I could use the samsung for OS then, but the others aren't usable as they use QLC.
Is the issue with QLC write amplification killing the drive quickly? | 1 | 0 | 2026-03-04T02:52:38 | venman38 | false | null | 0 | o8j14yy | false | /r/LocalLLaMA/comments/1riqlhl/hardware_usage_advice/o8j14yy/ | false | 1 |
t1_o8j136q | You gave it the same name as a completely different thing???
I always find humorous the dumb things that smart people do! | 1 | 0 | 2026-03-04T02:52:21 | __JockY__ | false | null | 0 | o8j136q | false | /r/LocalLLaMA/comments/1rjmnv4/meet_swerebenchv2_the_largest_open_multilingual/o8j136q/ | false | 1 |
t1_o8j0wey | Did you really do all this work on a 3060?
Fairplay! | 1 | 0 | 2026-03-04T02:51:14 | Ok-Measurement-1575 | false | null | 0 | o8j0wey | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8j0wey/ | false | 1 |
t1_o8j0vg6 | > silent
Yeah, not really. IDK where this marketing myth comes from, in my experience Macbooks are not quite silent when you actually put them under a load. | 1 | 0 | 2026-03-04T02:51:04 | Economy_Cabinet_7719 | false | null | 0 | o8j0vg6 | false | /r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8j0vg6/ | false | 1 |
t1_o8j0u5u | Is this the paper that give LLM a jupyter notebook? | 1 | 0 | 2026-03-04T02:50:51 | o0genesis0o | false | null | 0 | o8j0u5u | false | /r/LocalLLaMA/comments/1rk9bge/improved_on_the_rlm_papers_repl_approach_and/o8j0u5u/ | false | 1 |
t1_o8j0tek | You have to remember that a dense model is 'smarter' than a sparse model at the same sizes. The 27b is much smarter than the 35b A3. The 27b is close to the 122b A10 in terms of capability using the old sqrt(size\*active) formula.
So lets compare those two models using say 8 agents running at the same time. Now in a w... | 1 | 0 | 2026-03-04T02:50:44 | RnRau | false | null | 0 | o8j0tek | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8j0tek/ | false | 1 |
t1_o8j0t1b | Interested, but I have zero hope you'll actually post a repo because if that was your intent you'd have tidied the code _first_ and posted about it on Reddit _second_. Instead you posted a video and got your dopamine hit.
Time will tell! | 1 | 0 | 2026-03-04T02:50:40 | __JockY__ | false | null | 0 | o8j0t1b | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8j0t1b/ | false | 1 |
t1_o8j0rjy | Is this working for the RTX 5080? Can I switch to vLLM or SGLang to take advantage of NVFP4 hardware acceleration? | 1 | 0 | 2026-03-04T02:50:26 | InternationalNebula7 | false | null | 0 | o8j0rjy | false | /r/LocalLLaMA/comments/1rjg514/qwen35_100b_part_ii_nvfp4_blackwell_is_up/o8j0rjy/ | false | 1 |
t1_o8j0nlb | and im sorry i dont know what tuned to the market means......this is my first time using reddit.....im not very good with social midea platforms | 1 | 0 | 2026-03-04T02:49:46 | Mable4200 | false | null | 0 | o8j0nlb | false | /r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8j0nlb/ | false | 1 |
t1_o8j0ha3 | No, it’s just their standard models. The only difference would be the scaffold and additions to the prompt. Default model is sonnet. The chart has an obvious error, if it is claude code +opus vs not claude code, they should indicate that on the chart | 1 | 0 | 2026-03-04T02:48:42 | jtjstock | false | null | 0 | o8j0ha3 | false | /r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/o8j0ha3/ | false | 1 |
t1_o8j0ccp | also nobody has ever showed me how to do any of this.....i know you are all much smarter at all this then i am ...but i thought what i was able to just do with talking to the ai was maybe something that isnt easly done on a public accessed platform..without running codes or introducing hacks.....just through natural co... | 1 | 0 | 2026-03-04T02:47:54 | Mable4200 | false | null | 0 | o8j0ccp | false | /r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8j0ccp/ | false | 1 |
t1_o8j0bzn | Prefill being the biggest pain point for pre-M5, though, it's certainly intriguing! | 1 | 0 | 2026-03-04T02:47:51 | Consumerbot37427 | false | null | 0 | o8j0bzn | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8j0bzn/ | false | 1 |
t1_o8j06lm | I would assume that a certain percentage of the web nowadays includes LLM generated thoughts. | 1 | 0 | 2026-03-04T02:46:58 | Environmental_Form14 | false | null | 0 | o8j06lm | false | /r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8j06lm/ | false | 1 |
t1_o8j00fq | Yassss thank you! | 1 | 0 | 2026-03-04T02:45:57 | Borkato | false | null | 0 | o8j00fq | false | /r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8j00fq/ | false | 1 |
t1_o8izwyt | Oh hmmm, that sucks. I'll try it tomorrow. Hopefully they fix it. We are probably all going to need these models the way geopolitics is going.
How is the quality though? | 1 | 0 | 2026-03-04T02:45:23 | inigid | false | null | 0 | o8izwyt | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8izwyt/ | false | 1 |
t1_o8izrpt | I just quantized myself the model from the bare weights and now it is producing a thinking trace and it also seems to be quicker than the unsloth model with the same quantization level. | 1 | 0 | 2026-03-04T02:44:32 | WowSkaro | false | null | 0 | o8izrpt | false | /r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/o8izrpt/ | false | 1 |
t1_o8iznsa | i even had the ai walk me through putting its self on my chrome book and it worked......
| 1 | 0 | 2026-03-04T02:43:54 | Mable4200 | false | null | 0 | o8iznsa | false | /r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8iznsa/ | false | 1 |
t1_o8izl9l | it's not in the UI at all. | 1 | 0 | 2026-03-04T02:43:30 | ZootAllures9111 | false | null | 0 | o8izl9l | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8izl9l/ | false | 1 |
t1_o8izjs1 | fwiw, this is what I get for 27b and 122b (both 6bit) on m4 max 128GB.
Benchmark Model: Qwen3.5-27B-6bit
================================================================================
Single Request Results
--------------------------------------------------------------------------------
Test... | 1 | 0 | 2026-03-04T02:43:15 | slypheed | false | null | 0 | o8izjs1 | false | /r/LocalLLaMA/comments/1rdkze3/m3_ultra_512gb_realworld_performance_of/o8izjs1/ | false | 1 |
t1_o8iziqo | ???
| 1 | 0 | 2026-03-04T02:43:05 | Mable4200 | false | null | 0 | o8iziqo | false | /r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8iziqo/ | false | 1 |
t1_o8izeel | OP writes:
> 1. Fake credentials in HTML comments (only useful if you read and understand natural language)
> 2. Actual prompt injection payloads targeting any LLM that processes the page
It looks like the prompt injection is what is telling the attacking LLM to use the fake credentials.
I would love to see a detai... | 1 | 0 | 2026-03-04T02:42:23 | AnticitizenPrime | false | null | 0 | o8izeel | false | /r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8izeel/ | false | 1 |
t1_o8izani | Hi I am Sehyo and thanks!
Last night I made a PR to Heretic, that adds proper support for Qwen 3.5. I noticed that the other PRs are flawed. I made my own heretic version of the 35B so far and tested with MMLU Pro and IFEval and it scored slightly better than the original.. I can probably make a 122B Heretic later tod... | 1 | 0 | 2026-03-04T02:41:46 | VectorD | false | null | 0 | o8izani | false | /r/LocalLLaMA/comments/1rjqff6/sabomakoqwen35122ba10bhereticgguf_hugging_face/o8izani/ | false | 1 |
t1_o8izan9 | **Links to the project:**
* **GitHub:**[https://github.com/awsome-o/grafana-lens](https://github.com/awsome-o/grafana-lens)
* Grafana Stack: [https://github.com/grafana/docker-otel-lgtm](https://github.com/grafana/docker-otel-lgtm)
* **NPM:** `openclaw-grafana-lens` | 1 | 0 | 2026-03-04T02:41:45 | Local-Gazelle2649 | false | null | 0 | o8izan9 | false | /r/LocalLLaMA/comments/1rk9mca/project_i_built_a_selfhosted_grafana/o8izan9/ | false | 1 |
t1_o8iz6gf | 可以输出中文的语音吗? | 1 | 0 | 2026-03-04T02:41:04 | AlternativeCow6833 | false | null | 0 | o8iz6gf | false | /r/LocalLLaMA/comments/1qq401x/i_built_an_opensource_localfirst_voice_cloning/o8iz6gf/ | false | 1 |
t1_o8iyvoc | Yeah it doesn't work on a phone for some reason, but it does on a PC XD | 1 | 0 | 2026-03-04T02:39:18 | c64z86 | false | null | 0 | o8iyvoc | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8iyvoc/ | false | 1 |
t1_o8iytvd | Qwen coder next is faster on my 32 GB gpu. Its not only about the amount of active parameters per token, its also the difference between a reasoner and an instruct model, there isnt thousands of thinking tokens wasted per session. Even if you turn thinking off on the reasoner (thus making it dumber than the instr... | 1 | 0 | 2026-03-04T02:39:00 | brahh85 | false | null | 0 | o8iytvd | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o8iytvd/ | false | 1 |
t1_o8iyp94 | [https://www.wsj.com/world/china/china-ai-us-travel-advisory-ff248349](https://www.wsj.com/world/china/china-ai-us-travel-advisory-ff248349) | 1 | 0 | 2026-03-04T02:38:15 | Ok_Warning2146 | false | null | 0 | o8iyp94 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8iyp94/ | false | 1 |
t1_o8iykbd | 9b is up! | 1 | 0 | 2026-03-04T02:37:26 | hauhau901 | false | null | 0 | o8iykbd | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8iykbd/ | false | 1 |
t1_o8iyht7 |
it isnt a 1 on one conversation with my selfand it isnrt ai genreated i had the ai tell me trhe termonolgy used to explane the exploits
....and im sorry i dont know the termonolgy for the exploits.....i just know that i can interact with any of the ai platforms whit just conversation alone get the ai to do and dsay ... | 1 | 0 | 2026-03-04T02:37:01 | Mable4200 | false | null | 0 | o8iyht7 | false | /r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8iyht7/ | false | 1 |
t1_o8iybdm | it isnt a 1 on one conversation with my self....and im sorry i dont know the termonolgy for the exploits.....i just know that i can interact with any of the ai platforms whit just conversation alone get the ai to do and dsay things that are supposed to be on lock down... | 1 | 0 | 2026-03-04T02:35:57 | Mable4200 | false | null | 0 | o8iybdm | false | /r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8iybdm/ | false | 1 |
t1_o8iy69n | It's certainly a trend, but not quite! Check `allenai/Olmo-3-1125-32B`, I tried that one personally, and it's a genuine Internet snapshot.
The biggest most recent one is `stepfun-ai/Step-3.5-Flash-Base`. I haven't tried it out personally, but they claim it's a truly base model (they have the separate release for the m... | 1 | 0 | 2026-03-04T02:35:07 | FriskyFennecFox | false | null | 0 | o8iy69n | false | /r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8iy69n/ | false | 1 |
t1_o8ixyea | Why do I get 20+ tp/s on this model vs ~11 on the non abliterated model of the same unsloth version? | 1 | 0 | 2026-03-04T02:33:49 | bcell4u | false | null | 0 | o8ixyea | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8ixyea/ | false | 1 |
t1_o8ixxzb |
Two years ago I visited Japan, and during the 14+ hour flight I was using Gemma (the first one, 7b version) on my laptop to brush up on basic conversational Japanese, offline, at 40,000 feet flying over Alaska and the Kuril islands. And we've come a long way in the two years since.
I think it's incredible that I ca... | 1 | 0 | 2026-03-04T02:33:45 | AnticitizenPrime | false | null | 0 | o8ixxzb | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ixxzb/ | false | 1 |
t1_o8ixwwu | It's China street rules, bud. Been there seen that. | 1 | 0 | 2026-03-04T02:33:35 | TomLucidor | false | null | 0 | o8ixwwu | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ixwwu/ | false | 1 |
t1_o8ixtog | That’s tame. Jailbroken Clause is a sex pest par execellece with homocidal ideation | 1 | 0 | 2026-03-04T02:33:03 | 1-800-methdyke | false | null | 0 | o8ixtog | false | /r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8ixtog/ | false | 1 |
t1_o8ixovl | 有没有中文界面? | 1 | 0 | 2026-03-04T02:32:16 | AlternativeCow6833 | false | null | 0 | o8ixovl | false | /r/LocalLLaMA/comments/1qq401x/i_built_an_opensource_localfirst_voice_cloning/o8ixovl/ | false | 1 |
t1_o8ixnxm | you mean like another card? 3090 is still best value for vram. if you want meaningful ctx size you are gonna want 24+ ..
pairing a 5x series with a 3x series is just going to slow down the 5 series so just get a used 3090 | 1 | 0 | 2026-03-04T02:32:08 | arthor | false | null | 0 | o8ixnxm | false | /r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/o8ixnxm/ | false | 1 |
t1_o8ixnwe | Pretty soon the only thing the human is needed for is to assume legal responsibility for signing off on something. AI agents could synthesize everything and then hand the complete analysis over to a human.
Goodbye white collar jobs... | 1 | 0 | 2026-03-04T02:32:07 | SkyFeistyLlama8 | false | null | 0 | o8ixnwe | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ixnwe/ | false | 1 |
t1_o8ixllu | and i dont know how to use softwear or scripts to make the ai do this stuff....i just talk it into doing this stuff ...on any platform
| 1 | 0 | 2026-03-04T02:31:45 | Mable4200 | false | null | 0 | o8ixllu | false | /r/LocalLLaMA/comments/1rk4ba9/crossplatform_discovery_total_refusal_bypass_via/o8ixllu/ | false | 1 |
t1_o8ixf1m | This | 1 | 0 | 2026-03-04T02:30:40 | 1-800-methdyke | false | null | 0 | o8ixf1m | false | /r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8ixf1m/ | false | 1 |
t1_o8ixetm | I was using the 35B MOE for everything but I think I'll switch to your approach. I'm already using Granite Micro 3B or Qwen 3 4B on NPU for quick summaries and simple RAG. I'll add the dense 27B as a synthesis agent. Previously I was using Mistral Small 3.2 24B for that, any comparisons between the Mistral and new Qwen... | 1 | 0 | 2026-03-04T02:30:38 | SkyFeistyLlama8 | false | null | 0 | o8ixetm | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ixetm/ | false | 1 |
t1_o8ixd59 | Add more ram. I got a 3080ti and added about 64gb to get to a total of 96gb. Bought it cheap.
I can handle 3.5 27b and qwen3 coder next which is 80b.
Q4 models I'm prompting at 1400 tokens per second. | 1 | 0 | 2026-03-04T02:30:22 | nakedspirax | false | null | 0 | o8ixd59 | false | /r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/o8ixd59/ | false | 1 |
t1_o8ix28h | Just for the record, it was only one author behind norm-preserving biprojected abliteration. | 1 | 0 | 2026-03-04T02:28:35 | grimjim | false | null | 0 | o8ix28h | false | /r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/o8ix28h/ | false | 1 |
t1_o8ix1rs | Sounds like Alibaba’s leadership doesn’t undershot WHY Qwen is successful.
It will do terribly as a closed model | 1 | 0 | 2026-03-04T02:28:30 | ObjectiveOctopus2 | false | null | 0 | o8ix1rs | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ix1rs/ | false | 1 |
t1_o8ix1iy | Well, if u become famous when u r outside of China, of course, then u r not under this restriction. Apparently, JYL does not fall under this case. | 1 | 0 | 2026-03-04T02:28:28 | Ok_Warning2146 | false | null | 0 | o8ix1iy | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ix1iy/ | false | 1 |
t1_o8iwxoi | It would be more convincing if this wasn’t ai-generated.
I get that you probably put a lot of original work into the prompt that generated this, but it feels tuned to market instead of inform.
Jailbreaking and coherent memory strategies are constantly evolving and it’s good that people share their work on what they... | 1 | 0 | 2026-03-04T02:27:51 | Simulacra93 | false | null | 0 | o8iwxoi | false | /r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8iwxoi/ | false | 1 |
t1_o8iwvo6 | Have a 1 on 1 conversation with yourself, You've solved it! In all actuality you are suffering from AI psychosis. A sycophant AI and 8 hours of context overfill sounds like a dream. | 1 | 0 | 2026-03-04T02:27:31 | l33t-Mt | false | null | 0 | o8iwvo6 | false | /r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8iwvo6/ | false | 1 |
t1_o8iwpp1 | I suggested the very same on another sub a while back and got down voted to oblivion. | 1 | 0 | 2026-03-04T02:26:32 | roosterfareye | false | null | 0 | o8iwpp1 | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8iwpp1/ | false | 1 |
t1_o8iwjk2 | sure, there must be no Chinese in Anthropic/OpenAI/Google team. | 1 | 0 | 2026-03-04T02:25:34 | Key_Papaya2972 | false | null | 0 | o8iwjk2 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8iwjk2/ | false | 1 |
t1_o8iw8g9 | Same. Debating 4TB vs 8TB. But definitely 128Gb RAM | 1 | 0 | 2026-03-04T02:23:45 | 1-800-methdyke | false | null | 0 | o8iw8g9 | false | /r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8iw8g9/ | false | 1 |
t1_o8iw7wh | Pmd, let’s run this. | 1 | 0 | 2026-03-04T02:23:40 | neoescape | false | null | 0 | o8iw7wh | false | /r/LocalLLaMA/comments/1ri0v3e/anyone_need_a_12channel_ddr5_rdimm_ram_set_for_an/o8iw7wh/ | false | 1 |
t1_o8iw5c3 | Update with full test scope so far (all runs done under fixed prompt/seed/flags):
Hardware/topology:
\- 2x RTX 3090 on B550 (non-P2P / \`NO\_PEER\_COPY = 1\`)
\- Linux + CUDA
\- \`--split-mode layer -ngl 999\`
\- fixed params: \`--seed 123 --temp 0 --top-k 1 --top-p 1.0 --flash-attn on\`
\- prompt: "Continue this... | 1 | 0 | 2026-03-04T02:23:16 | MaleficentMention703 | false | null | 0 | o8iw5c3 | false | /r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/o8iw5c3/ | false | 1 |
t1_o8iw54e | Did you check the updates that Unsloth put out for the jinja? It might help and you can also increase the repetition penalty to something like 1.1 to see if that helps. | 1 | 0 | 2026-03-04T02:23:13 | knownboyofno | false | null | 0 | o8iw54e | false | /r/LocalLLaMA/comments/1rk8knf/qwen3518breapa3bcoding_50_expertpruned/o8iw54e/ | false | 1 |
t1_o8iw35m | $200 discount? | 1 | 0 | 2026-03-04T02:22:54 | 1-800-methdyke | false | null | 0 | o8iw35m | false | /r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8iw35m/ | false | 1 |
t1_o8ivkhl | happy to hear that! Will DM | 1 | 0 | 2026-03-04T02:19:52 | alichherawalla | false | null | 0 | o8ivkhl | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8ivkhl/ | false | 1 |
t1_o8ivk2u | i think UD\_IQ3 quant would be worth it it u can fully offload to GPU.
I quants tend to preserve performance more for STEM/Coding, so depends on your use case. | 1 | 0 | 2026-03-04T02:19:48 | Far-Low-4705 | false | null | 0 | o8ivk2u | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ivk2u/ | false | 1 |
t1_o8ivjnw | You missed the headline: SSD in M5 Max MacBook Pros delivers over 14.5GB/s read and write speeds, making it roughly 2–2.5x faster than the SSD in last generation M4-based models, depending on the specific test. | 1 | 0 | 2026-03-04T02:19:43 | 1-800-methdyke | false | null | 0 | o8ivjnw | false | /r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8ivjnw/ | false | 1 |
t1_o8ivh2u | i dont know what heretic is......are you saying that its easy to do this and its not a skill that is looked for.....sorry im very new to this stuff
| 1 | 0 | 2026-03-04T02:19:18 | Mable4200 | false | null | 0 | o8ivh2u | false | /r/LocalLLaMA/comments/1rk4ba9/crossplatform_discovery_total_refusal_bypass_via/o8ivh2u/ | false | 1 |
t1_o8ivggu | On my way hone from work rn, will upload when I get home. Also I forgot to mention that my flappy bird test was performed on a Q4_K_M GGUF, which took about 90% of my VRAM. | 1 | 0 | 2026-03-04T02:19:13 | 17hoehbr | false | null | 0 | o8ivggu | false | /r/LocalLLaMA/comments/1rk8knf/qwen3518breapa3bcoding_50_expertpruned/o8ivggu/ | false | 1 |
t1_o8ive7b | They probably just want to switched to closed source 🤔 https://x.com/kevinsxu/status/2028926776605389165 | 1 | 0 | 2026-03-04T02:18:51 | ANR2ME | false | null | 0 | o8ive7b | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ive7b/ | false | 1 |
t1_o8iv5tu | Incredible how it reached the right conclusion multiple times, but was so convinced that couldn’t possibly be right for seemingly no reason | 1 | 0 | 2026-03-04T02:17:27 | Fit_West_8253 | false | null | 0 | o8iv5tu | false | /r/LocalLLaMA/comments/1rk631c/qwen35_9b_q4_k_m_car_wash_philosophy_if_someone/o8iv5tu/ | false | 1 |
End of preview. Expand in Data Studio
LocalLLaMA-comments
A companion dataset to pszemraj/LocalLLaMA-posts. Time frame is in sync (up through Tue Mar 3 9PM EST 2026)
- Downloads last month
- 21