deleted by creator
Adobe can’t be bothered to fix it; instead of addressing the problem, they ended up adding a “Scratch Disk”, aka their own app-level virtual memory.
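If you’re curious what a “scratch disk” amounts to, here’s a minimal C sketch of the general idea: application-level paging, spilling data to a temp file when it won’t fit in a RAM budget and reading it back on demand. This is only the shape of the technique, not Adobe’s actual implementation:

```c
#include <stdio.h>
#include <string.h>

/* Hedged sketch of the general "scratch disk" idea: application-level
 * paging, where the app spills data it can't fit in its RAM budget to a
 * temp file and reads it back on demand. This is only the shape of the
 * technique, NOT Adobe's actual implementation. */
int main(void) {
    FILE *scratch = tmpfile();   /* anonymous temp file, deleted on close */
    if (!scratch) { perror("tmpfile"); return 1; }

    /* pretend this tile no longer fits in our RAM budget: spill it */
    char tile[4096];
    memset(tile, 0xAB, sizeof tile);
    if (fwrite(tile, 1, sizeof tile, scratch) != sizeof tile) return 1;

    /* ...later, page it back in on demand */
    char back[4096];
    rewind(scratch);
    if (fread(back, 1, sizeof back, scratch) != sizeof back) return 1;

    printf("tile round-tripped through scratch: %s\n",
           memcmp(tile, back, sizeof tile) == 0 ? "ok" : "corrupt");
    return 0;
}
```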
I’m going to assume sarcasm, no?
deleted by creator
This is true, but only to a point. I have 64GB of RAM and I have seen Photoshop overshoot that and start eating up 20GB of page file. Working with the exact same files in Affinity Photo, it uses a quarter of that.
There is a difference between “Efficiently use available memory for program functions” and “Fill all available memory with bloat and poorly coded rubbish”
If your software’s function can be replicated using only a quarter of the system memory, then your software is poorly written. Which Photoshop is.
The benefit of having unused RAM is that every program you are using can remain in memory for quick multitasking access, and when you launch a new program it can be loaded into that unused RAM without unloading any of the currently running programs. What part of that is a misunderstanding? Would the user be better off if the application in focus aggressively reserved RAM it didn’t need, slowing down every other running application?
deleted by creator
This is only remotely true if you have a box dedicated to doing one single thing and nothing else. That is almost certainly not the case for the vast majority of Photoshop users
deleted by creator
for whom? as a power user, I’d keep affinity photo or photoshop, maya, max, blender and godot/unity open at the same time. I DO NOT WANT PS EATING UP ALL THE RESOURCES. Affinity so far (only 4 months into it) has been a delight.
deleted by creator
You speak from the perspective of someone who’s either always had enough RAM, or not enough work to do.
deleted by creator
Consumer software running on a consumer OS should not be grabbing all available RAM just because. Doing so will cause other applications to be moved to swap and have to be loaded back into RAM when the user goes to use them. In a server environment, doing something like running a SQL server, it would make more sense to grab all available RAM and start aggressively caching frequently accessed data in RAM to serve it sooner, on the assumption that the server’s primary role is to perform SQL operations as quickly as possible.
Specifically with Photoshop, what would be the benefit of it aggressively reserving RAM beyond what is needed to function?
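To make the server/consumer contrast concrete, here’s a rough C sketch of the same cache budgeted under each policy. Every name and number in it is invented for illustration; it’s nobody’s real code:

```c
#include <stdio.h>

/* Hypothetical sketch of the trade-off described above: the same cache,
 * budgeted differently depending on the machine's role. All names and
 * numbers are invented for illustration. */

/* consumer policy: take a small slice so other apps keep their RAM */
static size_t desktop_budget(size_t total_ram) {
    size_t cap = total_ram / 8;                 /* at most 1/8 of RAM...  */
    size_t max = (size_t)512 << 20;             /* ...and never > 512 MiB */
    return cap < max ? cap : max;
}

/* server policy: the box exists to serve queries, so claim most of it */
static size_t server_budget(size_t total_ram) {
    return total_ram - total_ram / 10;          /* leave ~10% for the OS */
}

int main(void) {
    size_t ram = (size_t)64 << 30;              /* pretend: a 64 GiB box */
    printf("desktop cache budget: %zu MiB\n", desktop_budget(ram) >> 20);
    printf("server cache budget:  %zu MiB\n", server_budget(ram) >> 20);
    return 0;
}
```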
deleted by creator
this is not true.
it entirely depends on the specific application.
there is no OS-level, standardized, dynamic allocation of RAM (definitely not on windows, i assume it’s the same for OSX).
this is because most programming languages handle RAM allocation within the individual program, so the OS can’t allocate RAM however it wants.
the OS could put processes to “sleep”, but that’s basically just the previously mentioned swap memory and leads to HD degradation and poor performance/hiccups, which is why it’s not used much…
so, no.
RAM is usually NOT dynamically allocated by the OS.
it CAN be dynamically allocated by individual programs, IF they are written in a way that supports dynamic allocation of RAM, which some languages do well, others not so much…
it’s certainly not universally true.
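to show what program-side allocation actually looks like, here’s a minimal C sketch. malloc/free are the real C calls; everything else is made up for illustration:

```c
#include <stdio.h>
#include <stdlib.h>

/* minimal sketch of program-side dynamic allocation in C: the program
 * decides how much memory it wants and when, via malloc/free. the OS
 * only ever sees these requests; it can't shrink the buffer for us. */
int main(void) {
    size_t n = 1000000;
    double *samples = malloc(n * sizeof *samples);
    if (samples == NULL) {             /* the OS may refuse the request */
        perror("malloc");
        return 1;
    }
    for (size_t i = 0; i < n; i++)
        samples[i] = i * 0.5;          /* actually touch the memory */
    printf("last sample: %f\n", samples[n - 1]);
    free(samples);                     /* and hand it back explicitly */
    return 0;
}
```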
also, what you describe when saying:
“Any modern OS will allocate RAM as necessary. If another application needs it, it will allocate some to it.”
…is literally swap. that’s exactly what the previous user said.
and swap is not the same as “allocating RAM when a program needs it”, instead it’s the OS going “oh shit! I’m out of RAM and need more NOW, or I’m going to crash! better be safe and steal some memory from disk!”
what happens is:
the OS runs out of RAM and needs more, so it marks a portion of the next best HD as swap-RAM and starts using that instead.
HDs are not built for this use case, so whichever processes use the swap space become slooooooow and responsiveness suffers greatly.
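if you’re on windows you can watch this happen yourself. GlobalMemoryStatusEx is a real Win32 call; the reading of the numbers is my own interpretation:

```c
#include <stdio.h>
#include <windows.h>

/* windows-only sketch: watch physical RAM vs. the commit limit (RAM +
 * page file) from user code. GlobalMemoryStatusEx is a real Win32 call;
 * the commentary is mine. when ullAvailPhys approaches zero while commit
 * usage keeps climbing, you're watching the "steal some memory from
 * disk" behavior described above. */
int main(void) {
    MEMORYSTATUSEX st;
    st.dwLength = sizeof st;
    if (!GlobalMemoryStatusEx(&st)) {
        fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
        return 1;
    }
    printf("memory load:       %lu%%\n", st.dwMemoryLoad);
    printf("physical RAM free: %llu MiB of %llu MiB\n",
           st.ullAvailPhys >> 20, st.ullTotalPhys >> 20);
    printf("commit free:       %llu MiB of %llu MiB\n",
           st.ullAvailPageFile >> 20, st.ullTotalPageFile >> 20);
    return 0;
}
```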
on top of that, memory of any kind is built for a certain number of read/write operations. this is what’s considered the “lifespan” of a memory component.
RAM is built for a LOT of (very fast) R/W operations.
hard drives are NOT built for that.
RAM sees at least an order of magnitude more R/W ops than a hard drive, so when a computer uses swap excessively, instead of as a very last resort as intended, the disk’s lifespan is vastly shortened.
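some back-of-the-envelope math on what that does to a drive. every number here is an assumption for illustration; real endurance ratings and swap traffic vary wildly:

```c
#include <stdio.h>

/* back-of-the-envelope sketch. ALL numbers are assumptions for
 * illustration; real endurance ratings and swap traffic vary wildly. */
int main(void) {
    double tbw_rating = 300.0;    /* assumed endurance: 300 TB written */
    double light_gb   = 50.0;     /* occasional swapping: ~50 GB/day   */
    double heavy_gb   = 1500.0;   /* constant swapping: ~1.5 TB/day    */

    double light_years = tbw_rating * 1000.0 / light_gb / 365.0;
    double heavy_years = tbw_rating * 1000.0 / heavy_gb / 365.0;

    printf("occasional swapping: ~%.1f years of rated endurance\n", light_years);
    printf("constant swapping:   ~%.1f years of rated endurance\n", heavy_years);
    return 0;
}
```

(note where the constant-swapping case lands. it’ll be relevant in a second…)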
for an example of a VERY stupid, VERY poor implementation of this behavior, look up the apple M1’s rapid SSD degradation.
short summary:
apple only put 8GB of RAM into the first gen M1s, which made the OS use swap almost continuously, which wore out the SSD MUCH faster than expected.
…and since the SSD is soldered onto the mainboard, that effectively bricks the device in about half a year to a year, depending on usage.
TL;DR: you’re categorically and objectively wrong about this. sorry :/
hope you found this explanation helpful tho!
we all need a little swap here and there, right
Using available RAM is only a good thing when doing so offers performance benefits. Many applications can’t be sped up by using more RAM. Using more RAM for no obvious reason is stupid, especially on a machine that has to do other things at the same time.
deleted by creator
Bad memory management can actually slow down applications significantly. Allocating memory is a fairly expensive operation; so much so that high-performance software uses a bunch of tricks to avoid extra allocations where possible. Additionally, accessing memory is slow for a CPU: if data isn’t already in the CPU’s cache, the CPU often has to sit around for many clock cycles waiting for it to be retrieved. If your main data can be stored more compactly, more of it can fit in the CPU’s cache, reducing that idle time.
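As a sketch of one such trick (assumed workload, not from any real codebase): hoist the allocation out of the hot loop and reuse a single scratch buffer, instead of paying for a malloc/free round trip on every iteration. Time both variants with a profiler to see the difference on your machine:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define ITERATIONS 100000
#define CHUNK 4096

/* naive: pay for an allocation on every iteration of the hot loop */
static void per_iteration(void) {
    for (int i = 0; i < ITERATIONS; i++) {
        char *buf = malloc(CHUNK);
        if (!buf) abort();
        memset(buf, i & 0xFF, CHUNK);   /* stand-in for real work */
        free(buf);
    }
}

/* reuse: one allocation for the whole run, recycled every iteration */
static void reused(void) {
    char *buf = malloc(CHUNK);
    if (!buf) abort();
    for (int i = 0; i < ITERATIONS; i++)
        memset(buf, i & 0xFF, CHUNK);
    free(buf);
}

int main(void) {
    per_iteration();
    reused();
    puts("done");
    return 0;
}
```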
deleted by creator
Bad memory management includes allocating memory you aren’t actually making use of.
deleted by creator