Looking to build a new computer for Mastercam 2020/Camplete - Page 2
  1. #21 - BRIAN.T (California)

    Quote Originally Posted by goooose:
    As mentioned above, no calculation is done on the graphics card, but I would still upgrade. Most likely you'll have more than just Mastercam running on the computer, and if you have 70+ ops displayed on the screen all at once... plus a plethora of other reasons, it will make the computer experience 'faster'.

    Some other things you could do in Mastercam: if you are confident your stock models will not change, you could save them out as STL and then base the next operation on the saved STL. This is a tad risky in my mind though, and STL isn't available for all paths. Another option: when possible, use stock that is defined by a previous tool size. You could also loosen the tolerance on your stock models, especially when you're just roughing, and try starting each stock model's stock from the previous stock model. This way you reduce the number of toolpaths calculated per stock model... that's all I can think of for now.

    Oh... and do not buy a used video card from eBay. Many are getting out of the bitcoin game and dumping the video cards they were using... or just dumping clapped-out cards altogether.

    I've never thought of saving a stock model as an STL for the purpose of "removing toolpath history", if you will. That's a brilliant idea. Thanks.

    And yes I fully agree with you on the eBay card, I'm the kind of guy who will gladly spend the money for the right tool, and peace of mind.

  2. #22 - BRIAN.T (California)

    Quote Originally Posted by gregormarwick:
    Regarding memory speed: with DDR4 on Intel it doesn't make much difference in real-world applications. I would suggest that you will be disappointed if you install faster ram with great expectations.

    Keep an eye on CPU load when it's struggling; probably you will have one core that is under load and the rest idling. If the one core that is loaded is 100% pegged all the time, then faster ram is unlikely to make any difference whatsoever. If it's hovering or jumping around 50-75% while the rest are idling, then it is almost certainly IO bottlenecked, in which case faster ram might help, but more cache would likely help a lot more.

    I am about to build a new workstation for cad/cam too, and it will be a Ryzen 3700X - single-core IPC is within the margin of error against Intel's best, and it has twice as much level 3 cache. Ryzen CPUs are much more sensitive to memory speed than Intel, but I will also be using slow (2666MHz) ram, because I will use ECC, which is not available in fast speeds. I made this decision based on my less than stellar experiences with DDR4 stability.*

    Regarding the GPU: yes, the P600 is an absolute turd of a GPU, but as you have observed it does very little of anything in a cam workstation. Fact is, ALL of the low-mid range Quadro cards are dog-slow GPUs. The equivalent AMD workstation cards offer more bang for buck, but it really doesn't matter much. I have a Quadro P2000 and a Radeon Pro 5100 in identical workstations; the 5100 is noticeably faster when modelling, but neither of them are under any load whatsoever when calculating toolpaths, so who cares.

    *Important to note that Intel do not support ECC ram unless you buy a Xeon, which was a real driver for choosing AMD for this build.
    Thanks for the reply, this is helpful... I think. I didn't understand a lot of it.

    So, my CPU is running at about 60-75 percent during toolpath generation, would it be safe to say faster ram would help, even marginally? If I spend $300 and get even a 5 percent increase in productivity I think it would be worth it.

    "More cache would help more" what does that mean, and who do I give my money to?

  3. #23 - gregormarwick (Aberdeen, UK)

    Quote Originally Posted by BRIAN.T:
    Thanks for the reply, this is helpful... I think. I didn't understand a lot of it.

    So, my CPU is running at about 60-75 percent during toolpath generation, would it be safe to say faster ram would help, even marginally? If I spend $300 and get even a 5 percent increase in productivity I think it would be worth it.

    "More cache would help more" what does that mean, and who do I give my money to?
    Firstly, it's critical to make the distinction between per-core load and total CPU load when analysing this type of thing.

    Take a look at this random screenshot I pulled off the internet:

    [image: Task Manager's CPU tab, showing a separate load graph for each logical processor]

    If you do not see this in your CPU tab, right-click on the graph and select Change Graph To > Logical Processors.

    Now, when calculating toolpaths it is most likely that you will see one, or maybe a couple, with much higher load than the rest. How many show load will depend on how well your cam software can utilise multiple threads. Most don't do very well at that, so you get one, two, or three that are loaded.

    Watch how they are loaded. If the most heavily loaded one is erratic and not pegged at 100%, then you have a situation where faster memory might make a difference. Emphasis on MIGHT!

    If the most heavily loaded one is pegged at 100% steadily, then you'll be wasting your money.
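
    If you'd rather log this than eyeball the Task Manager graphs, here's a rough Python sketch using the third-party psutil package (pip install psutil) - start it, then regenerate a toolpath and watch the columns:

    Code:
    import psutil

    # Print per-logical-processor load once a second until interrupted.
    print("per-core % load (Ctrl+C to stop)")
    try:
        while True:
            loads = psutil.cpu_percent(interval=1.0, percpu=True)
            print(" ".join(f"{x:5.1f}" for x in loads))
    except KeyboardInterrupt:
        pass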

    Regarding cache:

    Cache is an intermediate storage on the CPU itself where it keeps the most frequently used data so that it doesn't have to go back to main memory every time to retrieve it. There are levels.

    One reasonable analogy that I can think of is the toolchanger in your mill. Beside your mill you have a bench with some tool carriers, and way over on the other side of the workshop you have your tool crib.

    When you're running a job, the tools used most frequently are in the carousel. This is analogous to the L1/L2 cache. When the carousel is full but you still need more tools, you start swapping them out by hand from the tool carrier on your bench. It's a bit slower, but not horribly so. This is analogous to the L3 cache. When there's no more room on the bench, but you still need more tools, you have to walk all the way over to the tool crib to get what you want. This is massively slower, and is analogous to having to go back to main memory when the cache is full.
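
    You can actually see this effect from pretty much any language. A rough illustration in Python (needs numpy; exact numbers will vary by machine): summing every 8th element of a big array does 1/8 of the arithmetic, but because each of those elements sits on its own 64-byte cache line, the CPU still has to drag the entire array in from main memory, so it takes nearly as long as summing all of it:

    Code:
    import time
    import numpy as np

    a = np.ones(64_000_000)  # ~512 MB of doubles, far bigger than any L3 cache

    t0 = time.perf_counter(); a.sum(); t1 = time.perf_counter()
    t2 = time.perf_counter(); a[::8].sum(); t3 = time.perf_counter()

    print(f"all 64M elements : {t1 - t0:.3f} s")
    print(f"every 8th element: {t3 - t2:.3f} s  (1/8 the work, similar time)")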

    The cache specifications of your 7700K are as follows:

    L1 256KB
    L2 1MB
    L3 8MB

    For comparison, the Ryzen 3700X/3800X has the following:

    L1 512KB
    L2 4MB
    L3 32MB

    And Intel's current 9900KS:

    L1 512KB
    L2 2MB
    L3 16MB

    Coupled with the fact that AMD support ECC memory in this market segment, it's a no-brainer. There is a reason why AMD are absolutely dominating Intel right now, the only exception being games, which still tend to perform marginally better on Intel. And that gap is narrowing the whole time.

    Note however, that I am NOT advocating that you should run out and build an AMD system! All software is different, and I don't use Mastercam so I can't give you any real world data on how it performs on Ryzen cpus, nor do I know how heavily it relies on cache bandwidth/capacity. This entire post should be considered in general terms, not specific.

  4. Likes DavidScott liked this post
  5. #24 - BRIAN.T (California)

    Quote Originally Posted by gregormarwick:

    Now, when calculating toolpaths it is most likely that you will see one, or maybe a couple, with much higher load than the rest. How many show load will depend on how well your cam software can utilise multiple threads. Most don't do very well at that, so you get one, two, or three that are loaded.

    Watch how they are loaded. If the most heavily loaded one is erratic and not pegged at 100%, then you have a situation where faster memory might make a difference. Emphasis on MIGHT!

    If the most heavily loaded one is pegged at 100% steadily, then you'll be wasting your money.

    Regarding cache:
    This was extremely helpful, I very much appreciate it. As for my results: for my first test I programmed an outrageously complex path, every core shot to 100 percent and stayed there. I had to force quit that one.

    For my second test I regenerated one reasonably complicated path, and I see each logical processor (core?) kick up to about 90 percent, then they each jump around between about 50-90 percent. Would one consider that erratic? I'm also surprised to see each one working almost in unison; I've always thought Mastercam only took advantage of one core at a time. Perhaps I just don't understand what I'm looking at. However, based on what you've told me, perhaps faster ram might help, but more cache will definitely help, correct? My CPU isn't necessarily maxing out as far as what it can do, it just isn't able to do it fast enough, if I'm understanding correctly.

  6. #25 - DavidScott (Washington)

    It looks like your CPU is maxed out, and depending on how good your cooler is, that could be holding it back if your CPU is getting over 60C. I doubt faster ram would make a measurable difference; yours is pretty fast already. I agree with everything gregormarwick says and would like to add a few things. Take a look at AMD Threadripper CPUs: they run quad-channel ram and also support PCIe 4.0, which Intel doesn't support yet. Also, check out SSDs that use a PCIe slot vs SATA ports; they are typically 4-5 times faster with little change in cost, not that it should make a difference in creating code. Keep in mind the more items you plug into your PCIe slots, the more total lanes you are going to need, so make sure your CPU and motherboard will support them.

    Do you use CPU resources on your network or just on your single computer? This is something Mastercam is supposed to be capable of.

  7. #26 - gregormarwick (Aberdeen, UK)

    Quote Originally Posted by BRIAN.T:
    This was extremely helpful, I very much appreciate it. As for my results: for my first test I programmed an outrageously complex path, every core shot to 100 percent and stayed there. I had to force quit that one.

    For my second test I regenerated one reasonably complicated path, and I see each logical processor (core?) kick up to about 90 percent, then they each jump around between about 50-90 percent. Would one consider that erratic? I'm also surprised to see each one working almost in unison; I've always thought Mastercam only took advantage of one core at a time. Perhaps I just don't understand what I'm looking at. However, based on what you've told me, perhaps faster ram might help, but more cache will definitely help, correct? My CPU isn't necessarily maxing out as far as what it can do, it just isn't able to do it fast enough, if I'm understanding correctly.
    That isn't a result I expected to be entirely honest! So it would appear that MC is capable of actually properly utilising multiple threads - that puts a different perspective on things.

    Jumping between 90 and 50% when actually working on multiple cores is much more of a gray area than when considering one core alone. In a typical multithreaded process everything is still ultimately serial/linear - when one thread completes it often has to wait on some other one to complete before it can continue. How efficiently this can be scheduled dictates how well the cpu will be utilised, and is most likely the reason for the irregular load during regen.
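
    As a toy illustration of that stalling (Python, and obviously nothing like Mastercam's actual internals): the regions within one pass can run in parallel, but every worker has to finish before the next pass can start from the updated stock, so average utilisation drops at each join point:

    Code:
    from concurrent.futures import ProcessPoolExecutor

    def cut_region(args):
        region, stock_version = args
        # Stand-in for real geometry work on one independent region of a pass.
        sum(i * i for i in range(2_000_000))
        return stock_version + 1

    if __name__ == "__main__":
        passes = [list(range(8))] * 5  # 5 passes, 8 independent regions each
        stock_version = 0
        with ProcessPoolExecutor(max_workers=8) as pool:
            for n in range(len(passes)):
                jobs = [(r, stock_version) for r in passes[n]]
                # Fan out across the cores, then wait: the barrier between
                # passes is the serial part that keeps the cores from pegging.
                stock_version = max(pool.map(cut_region, jobs))
                print(f"pass {n} done, stock version {stock_version}")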

    Quote Originally Posted by DavidScott:
    It looks like your CPU is maxed out, and depending on how good your cooler is, that could be holding it back if your CPU is getting over 60C. I doubt faster ram would make a measurable difference; yours is pretty fast already. I agree with everything gregormarwick says and would like to add a few things. Take a look at AMD Threadripper CPUs: they run quad-channel ram and also support PCIe 4.0, which Intel doesn't support yet. Also, check out SSDs that use a PCIe slot vs SATA ports; they are typically 4-5 times faster with little change in cost, not that it should make a difference in creating code. Keep in mind the more items you plug into your PCIe slots, the more total lanes you are going to need, so make sure your CPU and motherboard will support them.

    Do you use CPU resources on your network or just on your single computer? This is something Mastercam is supposed to be capable of.
    Since it appears that MC can utilise multiple cores effectively, it could well be worth looking at higher end platforms like Threadripper. As I mentioned previously, AMD are absolutely making a fool of Intel in the high performance high core count sector right now.

    However, it's important to know how many cores MC will utilise if they are available. Right now on your 7700K you have 4 physical (real) cores, but you see 8 in the task manager because of hyperthreading. Hyperthreading is Intel's implementation of SMT (simultaneous multithreading) - this is a bit much to go into here, but you can google it yourself if you're interested. All you really need to know is that real cores are better than logical cores, and also that hyperthreading is the root of effectively all of Intel's recent high-profile security flaws.
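
    If you want to check the split on any given machine, the same psutil package from earlier reports both counts:

    Code:
    import psutil

    print("physical cores:", psutil.cpu_count(logical=False))  # 4 on a 7700K
    print("logical cores: ", psutil.cpu_count(logical=True))   # 8 with hyperthreading on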

    A Ryzen 3800X from AMD and a 9900K from Intel both have 8 physical cores / 16 logical, although the Ryzen has twice as much cache as I mentioned earlier.

    The current top of the range threadripper is the 3970X which has 32 physical cores / 64 logical. It also has 128MB of L3 cache, and as David mentioned supports quad channel memory. In the tool changer analogy I made up earlier, quad channel memory is like having twice as many people tripping back and forth to the tool crib.
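
    To put rough numbers on the memory side (peak theoretical figures; real-world throughput is lower): each DDR4 channel moves 8 bytes per transfer, so bandwidth is simply transfer rate times 8 times the channel count:

    Code:
    def ddr4_bandwidth_gbs(mt_per_s, channels):
        # Peak theoretical bandwidth: 64-bit (8-byte) transfers per channel.
        return mt_per_s * 8 * channels / 1000

    print(ddr4_bandwidth_gbs(3200, 2))  # dual channel DDR4-3200:    51.2 GB/s
    print(ddr4_bandwidth_gbs(3200, 4))  # quad channel Threadripper: 102.4 GB/s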

    Intel of course have processors in the same market segment, but honestly they are a joke right now compared to AMD, to the extent that they're not even worth discussing.

  8. #27 - EmanuelGoldstein (UK)

    Quote Originally Posted by BRIAN.T:
    This was extremely helpful, I very much appreciate it. As for my results: for my first test I programmed an outrageously complex path, every core shot to 100 percent and stayed there. I had to force quit that one.
    Could you toss up a screenshot that's representative of what you are doing? This sounds like something is really wrong... I've seen pretty complex stuff done in Smurfcam on much older boxes without trouble, we did fairly swoopy stuff in Cimatron on Intel (core duo?) ten years ago, and my Wildfire runs on a dual 800 MIPS machine with no problem. Ja, it's a little slow, but not like you describe.

    Are you on maintenance? If so, it sounds like this is what those camsters ought to be figuring out for you.

  9. #28 - Mike1974 (Florida)

    A couple of thoughts, although I'm not sure how relevant they are...

    Does 2020 still have a RAM saver application? I use that sometimes when I am using lots of stock models and it appears to give a slight boost to performance. Also, as some others have said, loosen up your tolerances for stock models until you are really trying to determine the itsy bitsy leftovers and making sure you are getting what you need. I sometimes use stock models starting at .01" resolution, and toolpaths and profiles at .005". This will make a much smaller, albeit less accurate, stock model for simple visual verification.

  10. #29 (Michigan)

    Quote Originally Posted by Mike1974:
    Does 2020 still have a RAM saver application?
    It's been renamed to Repair File but still functions the same.

  11. #30 (Michigan)

    Quote Originally Posted by BRIAN.T:
    That's actually great advice, I honestly don't know anything about ram speed, so I'll do some research and swap it out. I also assumed I should have gone with Xeon, glad you're telling me otherwise!
    A Xeon just isn't suited for Mastercam. The term "workstation" is misleading for us Mcam users: yes, your typical workstation will most likely have a Xeon, but then again your typical workstation is making models, doing graphic design, animation, etc. That's where the Xeon (or even Ryzen) would shine.

    But Intel is coming out with some whopper multi-core CPUs, so who knows.

  12. Likes mhajicek liked this post
  13. #31 - mhajicek (Minnesota)

    This is my CPU selector for Mastercam purposes, since with my workflow Mastercam very rarely wants to use more than four threads:

    PassMark CPU Benchmarks - Single Thread Performance

  14. #32 - DavidScott (Washington)

    Those benchmarks really show AMD is way ahead of Intel. If you look into more details than just single-core performance, AMD comes out even further ahead.

    Intel needs to come out with some better processors, but AMD's next release is a 64-core Threadripper, though its base clock is 3.0 GHz.

  15. #33 - EmanuelGoldstein (UK)

    Quote Originally Posted by DavidScott:
    AMD's next release is a 64-core Threadripper, though its base clock is 3.0 GHz.
    Unless you are running a server, that is pointless. A single workstation can't even keep four processors busy, much less 64. Add in that most software is written very poorly from a threading standpoint and you have a giant waste -- especially running on Windows which sucks dead donkey balls smp-wise.

    mhajicek has it right, choose by single thread performance 'cuz that's what these programs run as.

  16. Likes carbonbl, Mtndew, empwoer liked this post
  17. #34 - gregormarwick (Aberdeen, UK)

    Quote Originally Posted by EmanuelGoldstein:
    Unless you are running a server, that is pointless. A single workstation can't even keep four processors busy, much less 64. Add in that most software is written very poorly from a threading standpoint and you have a giant waste -- especially running on Windows which sucks dead donkey balls smp-wise.

    mhajicek has it right, choose by single thread performance 'cuz that's what these programs run as.
    These are workstation CPUs, just not the kind of work we do.

    Threadripper and the i9 sit in what has become known as the HEDT (high end desktop) sector, typically occupied by those doing audiovisual / content creation where the cores do count.

    I'd hesitate to call Intel's HEDT offerings true workstation CPUs though, as they don't support ECC memory as I mentioned earlier.

    I'd normally agree with you that cad/cam software doesn't benefit much from core counts greater than four (or eight if you multitask to some degree), but OP has witnessed his specific operations fully utilising eight threads. Tests need to be done to determine how many threads it will use; until then, suggesting that he shouldn't care about core count is speculation.

    Quote Originally Posted by DavidScott:
    Those benchmarks really show AMD is way ahead of Intel. If you look into more details than just single-core performance, AMD comes out even further ahead.

    Intel needs to come out with some better processors, but AMD's next release is a 64-core Threadripper, though its base clock is 3.0 GHz.
    Most people don't realise that Intel is in a bad way.

    They obviously have the resources and the stashed funds to weather a very big storm, but the fact is they're a few years into this already. The endless difficulties with their 10nm process have left them crippled, years behind their original roadmap, and their mid-range and high end are still on their 14nm++ node.

    Their extant architecture is aging and plagued with security flaws, thanks to their tunnel vision on performance and their perception of being untouchable for the last decade. Those flaws have done a lot of damage to their reputation, and cost a significant amount of performance due to mitigations.

    Endless minor incremental improvements have left them stagnant, and they have nothing up their sleeve to rival AMD. They have no answer to AMD's chiplet model, and in desperation they are throwing ever larger monolithic dies into the fray in an effort to compete, but these are expensive to manufacture due to yields, and as the core count increases the power-performance tanks hard. The rumoured upcoming i9-10990XE, if extrapolated from the current 10980XE, would draw something like 800W at 5GHz in order to compete with AMD's 3990X, which is a 280W CPU!

    They have had a lot of entrenched customers in the server space - along with some dubious business practices to retain them - which has kept their coffers full for many years, but now that AMD's Epyc has become impossible to ignore they are losing ground there too, at an ever accelerating pace.

    And to top it all off, Apple, their biggest OEM customer, is rumoured (and this is practically a given at this stage, with Google and Microsoft already doing the same) to be pushing their own ARM-based CPUs out across their entire product range over the coming years, phasing out their dependence on x64 altogether. Worth noting that Apple's current A13 mobile CPU performs similarly to a mid-range desktop i5 while consuming a tiny fraction of the power. If that can be scaled up effectively to desktop-level TDPs then they will be extremely powerful CPUs indeed.

  18. Likes TeachMePlease, SexieWASD liked this post
  19. #35 - DavidScott (Washington)

    Quote Originally Posted by EmanuelGoldstein:
    Unless you are running a server, that is pointless. A single workstation can't even keep four processors busy, much less 64. Add in that most software is written very poorly from a threading standpoint and you have a giant waste -- especially running on Windows which sucks dead donkey balls smp-wise.

    mhajicek has it right, choose by single thread performance 'cuz that's what these programs run as.
    I figured $2500 for a CPU is probably a deal killer anyway, so that was more a response to Intel coming out with CPUs with higher core counts.

    If Mastercam can use processing resources from other computers on the network I would hope they could make good use of the CPU on the home computer no matter how many cores it has.

  20. #36 - mhajicek (Minnesota)

    I know a guy who several years ago had Mastercam on a VM that was load shared across three computers. Got a modest performance boost from it.

  21. #37 - EmanuelGoldstein (UK)

    Quote Originally Posted by gregormarwick:
    Threadripper and the i9 sit in what has become known as the HEDT (high end desktop) sector, typically occupied by those doing audiovisual / content creation where the cores do count.
    Not so much... Inferno is still the hottest of the hot, and it ran fine on four slow MIPS CPUs; the bottleneck was getting the data in and out. You can run Inferno on a not-so-spectacular machine, but you gotta have the Stone and Wire ungodly-fast disk transfer to make it work. If you can't feed the damn things, they are just sitting there running 0's down the pipe. I bet a lot of this talk is pure marketing. Did you ever buy an Overdrive processor? Want to talk disappointment?

    Quote Originally Posted by gregormarwick:
    I'd normally agree with you that cad/cam software doesn't benefit much from core counts greater than four (or eight if you multitask to some degree), but OP has witnessed his specific operations fully utilising eight threads.
    I have serious doubts about his observation. First off, you have to feed those CPUs; even today's memory won't keep eight processors loaded like that. Second, the task itself has to be amenable to parallelization. Not many jobs in cadcam are that way. Third, the Mastercam programmers would have to be damn good. Better than the guys at CERN... I have serious doubts. Fourth, the operating system needs to be smp-optimized for this type of work, not just for server-type workloads. OS/2 was, BeOS was, Irix was to a certain extent, but Windows? Not a chance. (Apple also sucks the big one, by the way. Talk about disappointing: frigging 2020 and the bloody Finder still hangs the whole system. Stewpid.)

    I'd be surprised if Mastercam could actually use anything beyond two processors.

    Quote Originally Posted by DavidScott:
    If Mastercam can use processing resources from other computers on the network I would hope they could make good use of the CPU on the home computer no matter how many cores it has.
    Yeah, this is weird... even gigabit ethernet is way slower than the local memory bus. Look at a Numalink cable, which actually works: the thing is an inch and a half thick, not just six piddly wires running through a commodity circuit. The whole thing of "processing resources on other computers" may work for stuff like bitcoin mining, but for cadcam? Over ethernet? C'mon...

    Quote Originally Posted by mhajicek:
    I know a guy who several years ago had Mastercam on a VM that was load shared across three computers. Got a modest performance boost from it.
    Yes, again, something is messed up. Ethernet and virtual machines speed it up? My deskside is Numaflex and even then you want to run the code on the most local set of processors. If I look, there are very few things that can really use more than 2p*.

    It would be interesting to get some input from people with real-world HPC experience here. What they say is often not what you'd think... but I have learned that throwing more CPUs at it is not the answer. For effective smp the problem has to be suitable for parallelization, the operating system has to do threads extremely well, and the application has to be written extremely well. You don't see any one of those very often - to get all three together in a consumer-type setting is blue moon territory.

    *Image manipulation is an exception, per Gregor's remarks above, as you can split a screen into many pieces and give each p one piece. But even there, there is a balance: the splitting and distributing, then recomposing, takes resources, so it's not just a matter of slice-and-dice to go faster. But you can't do that when creating a toolpath, because you won't know where the tool is for part three until the cutter gets past parts one and two. It's sequential, so you're stuck.
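
    To put a number on why suitability matters: Amdahl's law. If a fraction s of the job is inherently serial, the best speedup you will ever get from N processors is 1/(s + (1-s)/N). A quick Python check shows how hard the serial part caps you:

    Code:
    def amdahl_speedup(serial_fraction, cores):
        # Best-case speedup when serial_fraction of the work can't be parallelized.
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    # With 20% of the work inherently serial, even 64 cores can't reach 5x:
    for n in (2, 4, 8, 32, 64):
        print(f"{n:2d} cores -> {amdahl_speedup(0.2, n):4.2f}x (hard cap: 5x)")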

  22. Likes empwoer liked this post
  23. #38 - gregormarwick (Aberdeen, UK)

    Quote Originally Posted by EmanuelGoldstein:
    Not so much... Inferno is still the hottest of the hot, and it ran fine on four slow MIPS CPUs; the bottleneck was getting the data in and out. You can run Inferno on a not-so-spectacular machine, but you gotta have the Stone and Wire ungodly-fast disk transfer to make it work. If you can't feed the damn things, they are just sitting there running 0's down the pipe. I bet a lot of this talk is pure marketing. Did you ever buy an Overdrive processor? Want to talk disappointment?

    I have serious doubts about his observation. First off, you have to feed those CPUs; even today's memory won't keep eight processors loaded like that. Second, the task itself has to be amenable to parallelization. Not many jobs in cadcam are that way. Third, the Mastercam programmers would have to be damn good. Better than the guys at CERN... I have serious doubts. Fourth, the operating system needs to be smp-optimized for this type of work, not just for server-type workloads. OS/2 was, BeOS was, Irix was to a certain extent, but Windows? Not a chance. (Apple also sucks the big one, by the way. Talk about disappointing: frigging 2020 and the bloody Finder still hangs the whole system. Stewpid.)

    I'd be surprised if Mastercam could actually use anything beyond two processors.

    Yeah, this is weird... even gigabit ethernet is way slower than the local memory bus. Look at a Numalink cable, which actually works: the thing is an inch and a half thick, not just six piddly wires running through a commodity circuit. The whole thing of "processing resources on other computers" may work for stuff like bitcoin mining, but for cadcam? Over ethernet? C'mon...

    Yes, again, something is messed up. Ethernet and virtual machines speed it up? My deskside is Numaflex and even then you want to run the code on the most local set of processors. If I look, there are very few things that can really use more than 2p*.

    It would be interesting to get some input from people with real-world HPC experience here. What they say is often not what you'd think... but I have learned that throwing more CPUs at it is not the answer. For effective smp the problem has to be suitable for parallelization, the operating system has to do threads extremely well, and the application has to be written extremely well. You don't see any one of those very often - to get all three together in a consumer-type setting is blue moon territory.

    *Image manipulation is an exception, per Gregor's remarks above, as you can split a screen into many pieces and give each p one piece. But even there, there is a balance: the splitting and distributing, then recomposing, takes resources, so it's not just a matter of slice-and-dice to go faster. But you can't do that when creating a toolpath, because you won't know where the tool is for part three until the cutter gets past parts one and two. It's sequential, so you're stuck.
    Generally when a post contains this much nonsense I only respond to it if there's a chance of any of it getting disseminated as fact. It honestly reads like your knowledge of computers ends abruptly at around the year 2k and everything else is just invention. Very similar to your claims about APT in the other thread.

    The bandwidth of DDR4 is about 15GB/s per channel at base frequency. On my home PC with 3600MHz DDR4 it's closer to 30GB/s per channel. There are any number of contemporary workloads that are fully able to keep all the cores of a current high-core-count CPU saturated, so evidently it's enough.

    You don't need to be employed at CERN to create efficiently threaded software. All modern OS APIs provide ample functionality for that, and even the threading in the C++ standard library is efficient and easy to implement. It's not even close to as difficult as it was 20 years ago. Figuring out how to actually parallelise your computations is the challenge, not the implementation, and the knowledge pool of how to do that is vastly deeper now than it was a couple of decades ago.

    All modern operating systems are competent at multiprocessing. Even Windows, albeit with some flaws. Apple, contrary to what you claim, have probably the best implementation of a thread dispatcher in any current mainstream OS in the form of Grand Central Dispatch, which is crazy efficient when software is written to properly utilise it.

    Regarding ethernet/Numalink: distributed compute does not treat ethernet as a CPU bus, obviously... Some data to be processed is bundled up in a packet and sent wholesale to the remote computer, where a resident process works on it and sends the results back. This is not a new thing by any stretch, so idk why you think this is weird.
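
    The general shape, as a bare-bones Python sketch using nothing but the standard library (host, port and authkey are invented for the example; Mastercam's actual implementation is obviously its own thing):

    Code:
    # worker.py - runs on the remote machine and waits for work units.
    from multiprocessing.connection import Listener

    def process(job):
        return sum(i * i for i in range(job))  # stand-in for the real computation

    with Listener(("0.0.0.0", 6000), authkey=b"shop-secret") as listener:
        with listener.accept() as conn:
            try:
                while True:
                    conn.send(process(conn.recv()))  # receive a unit, return the result
            except EOFError:
                pass  # dispatcher hung up

    # dispatcher.py - runs on the workstation and farms a unit out.
    from multiprocessing.connection import Client

    with Client(("192.168.1.50", 6000), authkey=b"shop-secret") as conn:
        conn.send(5_000_000)
        print("remote result:", conn.recv())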

    There are plenty of toolpaths that can be parallelised: parallel raster, for instance, is a trivial example, and waterline/z-level too, where each vertical pass can be processed independently and the link moves considered later. Any patterned geometry can be processed in parallel. Only things like non-uniform surface spiral finishing, where every move is unique, are actually impossible to parallelise.
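
    A toy sketch of the waterline case in Python (slice_at_z is a stand-in; a real CAM kernel is vastly more involved): every Z level is computed independently across the cores, then sorted for linking afterwards:

    Code:
    from multiprocessing import Pool

    def slice_at_z(z):
        # Stand-in for computing one independent Z-level contour pass.
        return (z, sum((int(z * 10) * i) % 7 for i in range(500_000)))

    if __name__ == "__main__":
        z_levels = [i * 0.5 for i in range(40)]      # 40 independent passes
        with Pool() as pool:
            passes = pool.map(slice_at_z, z_levels)  # fan out across all cores
        passes.sort(key=lambda p: p[0])              # link moves handled later, in order
        print(f"computed {len(passes)} passes")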

    I generally agree that cad/cam developers have been slow to implement real multithreading, but the fact is they all do to some degree nowadays. There's no reason to cast aspersions on OP's observations; what he stated is perfectly possible.

  24. #39 - BRIAN.T (California)

    Quote Originally Posted by EmanuelGoldstein:
    Could you toss up a screenshot that's representative of what you are doing? This sounds like something is really wrong... I've seen pretty complex stuff done in Smurfcam on much older boxes without trouble, we did fairly swoopy stuff in Cimatron on Intel (core duo?) ten years ago, and my Wildfire runs on a dual 800 MIPS machine with no problem. Ja, it's a little slow, but not like you describe.

    Are you on maintenance? If so, it sounds like this is what those camsters ought to be figuring out for you.
    Sorry for the delay in response, I've been very busy at work. I'll get a screenshot tonight. I will say, as far as age goes, at least in my experience, every year Mastercam requires more computing power regardless of complexity. 2018 ran better on my computer than 2020. I'm not sure what point I'm making. End of comment!

  25. #40 (Michigan)

    Quote Originally Posted by BRIAN.T:
    every year Mastercam requires more computing power regardless of complexity. 2018 ran better on my computer than 2020. I'm not sure what point I'm making. End of comment!
    That hasn't been my experience, to be honest. 2018, '19, '20 and even 2021 (beta) all seem to run the same performance-wise; in fact I would say that 2020 is better than previous versions when calculating toolpaths.

    One thing that some people overlook and that can affect performance is Windows itself. It can and does get cluttered up over time, and if you have the time or patience you can do a Windows refresh or even a full clean install. I've done this a couple of times and the difference is night and day.

  26. Likes gregormarwick liked this post
