gabriele97 12mo ago • 100%
Oh I didn't know they were aware of that, awesome!
gabriele97 12mo ago • 0%
I don't understand why so many posts here go unanswered, as if there were no users (?)
gabriele97 1y ago • 100%
League of Legends works perfectly on Linux
gabriele97 1y ago • 0%
Are you sure it's actually stuck and not just busy doing something (check disk I/O, CPU usage)?
gabriele97 1y ago • 100%
The recent update broke everything (it automatically locks on app exit on both Android and iOS, even if the automatic lock is set to never)
gabriele97 1y ago • 100%
Lol what happened to that bot?
gabriele97 1y ago • 100%
How is it possible that users noticed strange behavior (new cron jobs) and didn't check the scripts launched by those jobs 😱
gabriele97 1y ago • 100%
OT: what happened to all those beautiful mods? Are they still there? I remember IndustrialCraft, BuildCraft, there was a mod that introduced bees (?) and another one called Logistics Pipes. Damn, I was completely in love with those logistics pipes
gabriele97 1y ago • 100%
Lords of the Fallen (2014), because I never played it in the roughly 10 years since its release
gabriele97 1y ago • 83%
An option to have separate up/down vote buttons
gabriele97 1y ago • 100%
I am new to the alias world, so I have a question: how can I be sure that an alias provider doesn't have access to my emails when they are forwarded?
gabriele97 1y ago • 100%
Thanks for the link! A very pleasant read
gabriele97 1y ago • 100%
It would be great to have one
gabriele97 1y ago • 100%
As for Flatpak, did you try `flatpak remove --unused`?
Edit: I didn't see that you'd already done it, nvm
gabriele97 1y ago • 100%
It needs Firefox Beta or Nightly to be installed, right?
gabriele97 1y ago • 100%
Here in Italy we only drive manual
gabriele97 1y ago • 100%
It's not supported (yet). It's very sad.
gabriele97 1y ago • 50%
No.
EDIT: did it now for the first time
gabriele97 1y ago • 100%
Thank you! I like these weekly update posts
Hi, are Prometheus metrics enabled by default in the backend code, or does someone need to compile a build themselves with the right flags to enable them? Thank you
cross-posted from: https://lemmy.g97.top/post/82608

Hi, I want to upgrade my Nextcloud instance from version 26.0.5 to 27.0.2. Any advice? In particular regarding software versions: I currently have PHP 8.1.18, PostgreSQL 13.11, and Redis 6.0.16 installed. Do I need to upgrade them? Thank you!
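In case it helps anyone searching later, a minimal sketch of the usual `occ` upgrade path for a manually installed instance (the `/var/www/nextcloud` path, the `www-data` user, and the `nextcloud` database name are assumptions here; adjust them to your setup, and always back up first):

```
# Back up the database before touching anything (assuming PostgreSQL)
sudo -u postgres pg_dump nextcloud > nextcloud-26.0.5-backup.sql

# Enter maintenance mode while the server code is being replaced
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on

# ...replace the server code with the 27.0.2 release, keeping config/ and data/...

# Run the migration, then leave maintenance mode
sudo -u www-data php /var/www/nextcloud/occ upgrade
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off
```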
Am I missing something, or is there no way to show the upvote and downvote counts separately?
cross-posted from: https://lemmy.g97.top/post/56902

Hi, I have this situation when I `apt upgrade`: many pipewire-related packages are kept back. Why? How can I solve it? Thank you!

EDIT: dist-upgrade summary ![](https://lemmy.g97.top/pictrs/image/c446dc2c-08bd-4bf6-86bc-ade7ca0b7a7a.png)

EDIT2: OK, I solved it with:

```
apt autoremove   # only to remove old packages; it didn't solve the problem by itself
apt update
apt dist-upgrade
```
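For reference, a sketch of how one might inspect *why* apt keeps packages back before running the real thing (the package name below is just an example, not from my actual output):

```
# Dry-run the dist-upgrade to see what it would install or remove;
# packages are often kept back because the upgrade needs to remove or
# replace something, which plain `apt upgrade` refuses to do
apt-get -s dist-upgrade

# Explicitly installing one kept-back package also shows (and resolves)
# the new dependencies it wants to pull in
apt install pipewire
```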
cross-posted from: https://lemmy.g97.top/post/45846

It's an... FPGA (?)
Why? I don't know, maybe someone here will like it.
Hi! I would like to monitor accesses to the services behind my nginx reverse proxy. I tried fluentd to scrape the access.log file produced by nginx and export Prometheus metrics, but it uses a lot of resources (130+ MB of RAM just for fluentd, and each HTTP request to the fluentd metrics endpoint transfers a lot of data, which I assume will keep growing with the size of the access.log file).

Is there any "embedded" scraper with an integrated dashboard that I can call up on demand? For example, one of the problems with the previous solution is that Prometheus scrapes the metrics every 5 s, so several MB are constantly being transferred due to the size of the metrics, even when I am not looking at Grafana (my frontend for Prometheus). With an on-demand service I would scrape the access.log file only while I am actually using it.

Thank you
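One tool that might fit the on-demand requirement (my suggestion, not part of the setup above) is GoAccess: it parses the access log only when invoked and produces a self-contained HTML dashboard, so nothing scrapes anything while you are not looking.

```
# One-shot report, generated only when you ask for it
# (log path and COMBINED log format are assumptions; match your nginx log_format)
goaccess /var/log/nginx/access.log --log-format=COMBINED -o /tmp/report.html
```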
cross-posted from: https://lemmy.g97.top/post/5839

EDIT: this is a full benchmark I ran on my pool: [https://gist.github.com/thegabriele97/9d82ddfbf0f4ec00dbcebc4d6cda29b3](https://gist.github.com/thegabriele97/9d82ddfbf0f4ec00dbcebc4d6cda29b3).

Hi! I've been running into this issue since I started my homelab adventure a couple of months ago, so I am still very much a noob, sorry for that.

Today I decided to understand what happens and why, but I need your help to understand it better.

My homelab is a Proxmox setup with three 1 TB HDDs in raidz1 (ZFS) (I know the downsides of this and I made my decision anyway) and 8 GB of RAM, of which 3.5 are assigned to a VM. The rest is used by some LXC containers.

During heavy workloads (e.g. copying a file, downloading something via torrent/JDownloader) everything is very slow and other services start to become unresponsive due to the high IO delay.

I decided to test the three individual devices with this command:

`fio --ioengine=libaio --filename=/dev/sda --size=4G --time_based --name=fio --group_reporting --runtime=10 --direct=1 --sync=1 --iodepth=1 --rw=randread --bs=4k --numjobs=32`

All three (sda, sdb, sdc) give more or less these results:

```
Jobs: 32 (f=32): [r(32)][100.0%][r=436KiB/s][r=109 IOPS][eta 00m:00s]
fio: (groupid=0, jobs=32): err= 0: pid=3350293: Sat Jun 24 11:07:02 2023
read: IOPS=119, BW=479KiB/s (490kB/s)(4968KiB/10378msec)
slat (nsec): min=4410, max=40660, avg=12374.56, stdev=5066.56
clat (msec): min=17, max=780, avg=260.78, stdev=132.27
lat (msec): min=17, max=780, avg=260.79, stdev=132.27
clat percentiles (msec):
| 1.00th=[ 26], 5.00th=[ 50], 10.00th=[ 80], 20.00th=[ 140],
| 30.00th=[ 188], 40.00th=[ 230], 50.00th=[ 264], 60.00th=[ 296],
| 70.00th=[ 326], 80.00th=[ 372], 90.00th=[ 430], 95.00th=[ 477],
| 99.00th=[ 617], 99.50th=[ 634], 99.90th=[ 768], 99.95th=[ 785],
| 99.99th=[ 785]
bw ( KiB/s): min= 256, max= 904, per=100.00%, avg=484.71, stdev= 6.17, samples=639
iops : min= 64, max= 226, avg=121.14, stdev= 1.54, samples=639
lat (msec) : 20=0.32%, 50=4.91%, 100=8.13%, 250=32.85%, 500=49.68%
lat (msec) : 750=3.86%, 1000=0.24%
cpu : usr=0.01%, sys=0.00%, ctx=1246, majf=11, minf=562
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=1242,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
READ: bw=479KiB/s (490kB/s), 479KiB/s-479KiB/s (490kB/s-490kB/s), io=4968KiB (5087kB), run=10378-10378msec

Disk stats (read/write):
sda: ios=1470/89, merge=6/7, ticks=385624/14369, in_queue=405546, util=96.66%
```

Am I wrong, or are these very bad results? Why? The three identical HDDs are this model: [https://smarthdd.com/database/APPLE-HDD-HTS541010A9E662/JA0AB560/](https://smarthdd.com/database/APPLE-HDD-HTS541010A9E662/JA0AB560/)

I hope you can help me. Thank you!
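For what it's worth, ~110-120 random 4k IOPS is roughly what a 5400 rpm laptop drive can physically deliver, so the numbers above are low but not necessarily a sign of broken hardware. A sequential run is a useful sanity check; a sketch below (the `/dev/sda` device path is an assumption, and the run is read-only):

```
# Sequential 1 MiB reads on a single disk; a healthy 5400 rpm 2.5" HDD
# should reach somewhere around 80-120 MB/s here even when its random
# 4k IOPS look as low as in the run above
fio --ioengine=libaio --filename=/dev/sda --size=4G --time_based \
    --name=seqread --runtime=30 --direct=1 --rw=read --bs=1M \
    --iodepth=4 --numjobs=1
```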