Juan Martin Balam Peraza
Reviewed in Mexico on March 7, 2025
It wobbles. If you plan to use the iPad's touchscreen while working, this may not be the best choice, because the iPad moves. I ended up adding tape and Kola Loka (super glue) to make it sturdier, and I also reinforced it with a piece of plastic. Now I'm comfortable working with it.
D
Reviewed in the United States on January 8, 2025
I bought this drive on Saturday, September 14, 2024. On Monday, January 6, 2025, a bunch of the files on my server were gone. After investigating, I found that my newest hard drive wasn't showing up in the BIOS and was emitting the click of death. Don't trust your data to these drives: in my case, at least, it had a lifespan of 115 days. According to my monitoring software, it went from zero errors to simply not responding. No warning. I'm really upset. I'm never getting my data back. That's the true cost of trusting cheap drives.
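For anyone who wants earlier warning than this reviewer got, a periodic SMART check is the usual approach. A minimal sketch, assuming smartmontools is installed and run as root (the device name /dev/sdb is illustrative, not the reviewer's):

    #!/usr/bin/env bash
    # Minimal periodic SMART check; run from cron. /dev/sdb is illustrative.
    DRIVE=/dev/sdb
    # A drive that has died or fallen off the bus will fail this health query.
    if ! smartctl -H "$DRIVE" | grep -q PASSED; then
        echo "WARNING: $DRIVE failed its SMART health check or is not responding" >&2
        exit 1
    fi
    # The attributes that most often climb before a mechanical failure.
    smartctl -A "$DRIVE" | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'

This only helps if something acts on the output, and, as the review shows, a drive can also fail with no SMART warning at all.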
Army Navy Guns
Reviewed in the United States on August 2, 2024
A very good price for a new internal hard drive.
buggy
Reviewed in the United States on July 10, 2024
Recognized by the disk manager. Appropriate noise level. No complaints so far after a month of use.
Shlomo
Reviewed in Mexico on June 24, 2024
It lets you view your phone; it's good quality and durable. I recommend it 100%.
Susana Tánori
Reviewed in Mexico on December 6, 2024
A good stand: practical, lightweight, and portable.
Oswaldo H.
Reviewed in Mexico on December 12, 2024
It does exactly what it's supposed to and holds the phone well, but I would have preferred stiffer hinges: you can't really use your phone while it's on the stand, because the pressure of your finger tapping the screen makes the holder move easily.
John Smith
Reviewed in the United States on January 7, 2024
I've had a pair of these running in a NAS as backup for about 2 years so far with no significant issues, so I bought another, and then 6 more 40 days after that one.

Of the initial 2, only one has any reallocated sectors at all: 13 total, over a 2-year lifespan under regular use. Very decent. As of 11 AM EST today, all of these drives completed a full-disk dd (writing 1s) with no issues whatsoever, and the drive with the 13 bad sectors has not found any more. Time will tell.

Load_Cycle_Count increment is the only problem I have with these drives, and it is not an MDD problem. For the uninitiated, these are Seagate drives under the hood. Seagate implements some proprietary power saving, and the drives are NOT compatible with APM (i.e., via hdparm). So telling these drives to stop parking the heads via the normal method (i.e., hdparm -B 255) will not work, which I did not notice for 2 years.

As you can see below, /dev/sda and /dev/sdf are the drives purchased initially. Over a 2-year lifespan, their load cycle counts are fairly extreme, and /dev/sdh racked up 11526 load cycles in just 40 days:

    # for i in {a..i}; do echo /dev/sd$i && smartctl -a /dev/sd$i | grep Load_Cycle; done
    /dev/sda
    193 Load_Cycle_Count  0x0032  001  001  000  Old_age  Always  -  206445
    /dev/sdb
    193 Load_Cycle_Count  0x0032  100  100  000  Old_age  Always  -  7
    /dev/sdc
    193 Load_Cycle_Count  0x0032  100  100  000  Old_age  Always  -  11
    /dev/sdd
    193 Load_Cycle_Count  0x0032  100  100  000  Old_age  Always  -  13
    /dev/sde
    193 Load_Cycle_Count  0x0032  100  100  000  Old_age  Always  -  7
    /dev/sdf
    193 Load_Cycle_Count  0x0032  013  013  000  Old_age  Always  -  175988
    /dev/sdg
    193 Load_Cycle_Count  0x0032  100  100  000  Old_age  Always  -  7
    /dev/sdh
    193 Load_Cycle_Count  0x0032  095  095  000  Old_age  Always  -  11526
    /dev/sdi
    193 Load_Cycle_Count  0x0032  100  100  000  Old_age  Always  -  7

Power-on hours: 17289 for /dev/sda and /dev/sdf, 983 for /dev/sdh, 22 for /dev/sdc and /dev/sdd, and 16 for the remaining 4.

So, obviously, this is completely insane. At this rate the drive will wear out long before the actual medium degrades.

To stop this nonsense, explicitly disable the idle_b state on the drive via SeaChest (Windows) or openSeaChest (Linux; find it on GitHub) so the drives stop parking the heads so aggressively:

    # openSeaChest_PowerControl -d all --idle_b disable

Arguments in favor of excessive head parking can be made, and I'm sure some people are very convinced by them. These people are wrong. If you're shoving a bunch of these in a rack and not jumping up and down while writing data to them, there's no reason to park the heads this often.

I will be buying more of these.
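To reproduce the reviewer's check on another array, here is a minimal sketch under the same smartmontools assumption (run as root; the /dev/sd{a..i} range mirrors the review's setup and will differ on other machines):

    #!/usr/bin/env bash
    # Report load cycles per power-on hour for each drive, to spot aggressive
    # head parking. In smartctl's attribute table, column 2 is the attribute
    # name and column 10 is the raw value.
    for dev in /dev/sd{a..i}; do
        lcc=$(smartctl -A "$dev" | awk '$2 == "Load_Cycle_Count" {print $10}')
        poh=$(smartctl -A "$dev" | awk '$2 == "Power_On_Hours"  {print $10}')
        if [ -n "$lcc" ] && [ -n "$poh" ] && [ "$poh" -gt 0 ]; then
            echo "$dev: $lcc load cycles in $poh power-on hours (~$((lcc / poh))/hr)"
        fi
    done

By the reviewer's own numbers, /dev/sda works out to roughly 12 load cycles per power-on hour; drives are commonly rated for only a few hundred thousand load/unload cycles over their lifetime, which is why disabling idle_b matters here.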
Margarita Morales Cervantes
Reviewed in Mexico on June 21, 2023
The perfect accessory for these times.
Mario Noh
Reviewed in Mexico on May 3, 2023
I liked the design and the materials it's made of.
jorge v.
Reviewed in Mexico on February 28, 2023
Delivers what the seller promised.
Mike A.
Reviewed in the United States on July 30, 2022
This hard drive was inexpensive compared to other drives of the same size, and it's rated as a surveillance-grade drive, which is what I wanted since I was installing it in a security DVR for continuous use. It's been in service for about a month now and works well. I can't speak to its longevity since I just got it, but so far the DVR has always played back video on demand and hasn't experienced any drive errors.
Carlos Nevado
Reviewed in Mexico on April 3, 2022
Performs its function perfectly.