Working through the single responsibility principle (SRP) in Python when calls are expensive

Some base points:


  • Python method calls are "expensive" due to its interpreted nature. In theory, if your code is simple enough, breaking down Python code has a negative impact beyond readability and reuse (which is a big gain for developers, not so much for users).

  • The single responsibility principle (SRP) keeps code readable and makes it easier to test and maintain.

  • The project has a particular background in which we want readable code, tests, and time performance.

For instance, code like the following, which invokes four methods, is slower than the second version, which is a single function.



from operator import add

class Vector:
    def __init__(self, list_of_3):
        self.coordinates = list_of_3

    def move(self, movement):
        self.coordinates = list(map(add, self.coordinates, movement))
        return self.coordinates

    def revert(self):
        self.coordinates = self.coordinates[::-1]
        return self.coordinates

    def get_coordinates(self):
        return self.coordinates

## Operation with one vector
vec3 = Vector([1, 2, 3])
vec3.move([1, 1, 1])
vec3.revert()
vec3.get_coordinates()


In comparison to this:



from operator import add

def move_and_revert_and_return(vector, movement):
    return list(map(add, vector, movement))[::-1]

move_and_revert_and_return([1, 2, 3], [1, 1, 1])
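
For reference, here is a minimal timeit sketch (mine, not part of the original example) to measure the gap between the two versions; absolute numbers vary by interpreter and machine, so it is worth running under both CPython and PyPy:

import timeit

setup = """
from operator import add

class Vector:
    def __init__(self, list_of_3):
        self.coordinates = list_of_3

    def move(self, movement):
        self.coordinates = list(map(add, self.coordinates, movement))
        return self.coordinates

    def revert(self):
        self.coordinates = self.coordinates[::-1]
        return self.coordinates

    def get_coordinates(self):
        return self.coordinates

def move_and_revert_and_return(vector, movement):
    return list(map(add, vector, movement))[::-1]
"""

# Four method calls per iteration (including object construction, as in the
# question's example) versus a single function call.
oo_version = "v = Vector([1, 2, 3]); v.move([1, 1, 1]); v.revert(); v.get_coordinates()"
inlined_version = "move_and_revert_and_return([1, 2, 3], [1, 1, 1])"

print("methods: ", timeit.timeit(oo_version, setup=setup, number=100000))
print("function:", timeit.timeit(inlined_version, setup=setup, number=100000))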


If I am to parallelize something like that, it is fairly clear that I will lose performance. Mind that this is just an example; my project has several mini-routines with math like this. While it is much easier to work with, our profilers dislike it.




How and where do we embrace the SRP without compromising performance in Python, given that its implementation makes calls inherently costly?



Are there workarounds, like some sort of pre-processor that inlines things for release?



Or is Python simply poor at handling code breakdown altogether?





























  • Possible duplicate of Is micro-optimisation important when coding? – gnat, yesterday

  • For what it's worth, your two code examples do not differ in number of responsibilities. The SRP is not a method counting exercise. – Robert Harvey, yesterday

  • @RobertHarvey You're right, sorry for the poor example; I'll edit in a better one when I have the time. In either case, readability and maintainability suffer, and eventually the SRP breaks down within the codebase as we cut down on classes and their methods. – lucasgcb, yesterday

  • Note that function calls are expensive in any language, though AOT compilers have the luxury of inlining. – Eevee, yesterday

  • Use a JITted implementation of Python such as PyPy. It should mostly fix this problem. – Bakuriu, yesterday











Tags: python, performance, single-responsibility, methods






asked yesterday by lucasgcb; edited yesterday by Peter Mortensen







3 Answers







is Python simply poor at handling code breakdown altogether?




Unfortunately yes, Python is slow, and there are many anecdotes about people drastically increasing performance by inlining functions and making their code ugly.



There is a workaround: Cython, a compiled variant of Python, which is much faster.
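
As a rough sketch (not from the original answer), Cython can compile even an unmodified Python module ahead of time; assuming the question's Vector class lives in a hypothetical vector.py, a minimal build script might look like:

# setup.py -- minimal Cython build sketch; vector.py is a hypothetical module
# holding the question's Vector class
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("vector.py"))

Build with `python setup.py build_ext --inplace`; the compiled extension module is then imported exactly like the original pure-Python one, typically shaving off some interpreter overhead without touching the code's structure.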



Edit: I just wanted to address some of the comments and other answers, though the thrust of them is perhaps not Python-specific but about optimisation in general.




  1. Don't optimise until you have a problem, and then look for bottlenecks



    Generally good advice. But the assumption is that 'normal' code is usually performant, and this isn't always the case. Individual languages and frameworks have their own idiosyncrasies; in this case, function calls.




  2. It's only a few milliseconds; other things will be slower



    If you are running your code on a powerful desktop computer, you probably don't care as long as your single-user code executes in a few seconds.



    But business code tends to run for multiple users and to require more than one machine to support the load. If your code runs twice as fast, you can have twice the number of users or half the number of machines.



    If you own your machines and data centre, then you generally have a big chunk of headroom in CPU power. If your code runs a bit slow, you can absorb it, at least until you need to buy a second machine.



    In these days of cloud computing, where you use exactly the compute power you require and no more, there is a direct cost for non-performant code.



    Improving performance can drastically cut the main expense of a cloud-based business, and performance really should be front and centre.







– Ewan (answered yesterday, edited 15 hours ago)
  • While Robert's answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question), I feel this answers the situation a bit more directly and in line with the Python context. – lucasgcb, yesterday

  • Sorry it's somewhat short; I don't have time to write more. But I do think Robert is wrong on this one. The best advice with Python seems to be to profile as you code. Don't assume it will be performant, and only optimise if you find a problem. – Ewan, yesterday

  • @Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling. – Robert Harvey, yesterday

  • You can also try PyPy, which is a JITted Python. – Eevee, yesterday

  • @Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for Python. But then I really can't think of many examples there. The vast majority of business code is IO-limited, and the CPU-heavy stuff is usually handled by calling out to native libraries (numpy, tensorflow and so on). – Voo, yesterday













Many potential performance concerns are not really a problem in practice. The issue you raise may be one of them. In the vernacular, we call worrying about those problems without proof that they are actual problems premature optimization.



If you are writing a front-end for a web service, your performance is not going to be significantly affected by function calls, because the cost of sending data over a network far exceeds the time it takes to make a method call.



If you are writing a tight loop that refreshes a video screen sixty times a second, then it might matter. But at that point, I claim you have larger problems if you're trying to use Python to do that, a job for which Python is probably not well-suited.



As always, the way you find out is to measure. Run a performance profiler or some timers over your code. See if it's a real problem in practice.
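
As a minimal sketch of that advice (mine, not the answerer's), the standard-library profiler can time the question's hot path directly; the Vector class from the question is reproduced here so the snippet runs on its own:

import cProfile
from operator import add

class Vector:  # the question's example class, repeated to keep the sketch runnable
    def __init__(self, list_of_3):
        self.coordinates = list_of_3

    def move(self, movement):
        self.coordinates = list(map(add, self.coordinates, movement))
        return self.coordinates

    def revert(self):
        self.coordinates = self.coordinates[::-1]
        return self.coordinates

def hot_path():
    for _ in range(100000):
        v = Vector([1, 2, 3])
        v.move([1, 1, 1])
        v.revert()

cProfile.run("hot_path()", sort="cumulative")

The per-call rows in the output show whether method-call overhead is actually where the time goes, or whether it is lost in the noise.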




The Single Responsibility Principle is not a law or mandate; it is a guideline or principle. Software design is always about trade-offs; there are no absolutes. It is not uncommon to trade off readability and/or maintainability for speed, so you may have to sacrifice SRP on the altar of performance. But don't make that tradeoff unless you know you have a performance problem.






– Robert Harvey (answered yesterday, edited yesterday)
  • I think this was true until we invented cloud computing. Now one of the two functions effectively costs four times as much as the other. – Ewan, yesterday

  • @Ewan 4 times may not matter until you've measured it to be significant enough to care about. If Foo takes 1 ms and Bar takes 4 ms, that's not good. Until you realize that transmitting the data across the network takes 200 ms. At that point, Bar being slower doesn't matter so much. (Just one possible example of where being X times slower doesn't make a noticeable or impactful difference; not meant to be necessarily super realistic.) – Becuzz, yesterday

  • @Ewan If the reduction in the bill saves you $15/month but it will take a $125/hour contractor 4 hours to fix and test it, I could easily justify that not being worth a business's time (or at least not right now if time to market is crucial, etc.). There are always tradeoffs, and what makes sense in one circumstance might not in another. – Becuzz, yesterday

  • Your AWS bills are very low indeed. – Ewan, yesterday

  • @Ewan AWS rounds up in batches anyway (the standard is 100 ms), which means this kind of optimization only saves you anything if it consistently avoids pushing you into the next chunk. – Delioth, yesterday













First, some clarifications: Python is a language. There are several different interpreters which can execute code written in the Python language. The reference implementation (CPython) is usually what people mean when they talk about "Python" as if it were an implementation, but it is important to be precise when talking about performance characteristics, as they can differ wildly between implementations.




How and where do we embrace the SRP without compromising performance in Python, as its inherent implementation directly impacts it?




Case 1.)
If you have pure Python code (<= Python Language version 3.5, 3.6 has "beta level support") which only relies on pure Python modules, you can embrace SRP everywhere and use PyPy to run it. PyPy (https://morepypy.blogspot.com/2019/03/pypy-v71-released-now-uses-utf-8.html) is a Python interpreter which has a Just in Time Compiler (JIT) and can remove function call overhead as long as it has sufficient time to "warm up" by tracing the executed code (a few seconds IIRC).



If you are restricted to the CPython interpreter, you can extract the slow functions into extensions written in C, which will be pre-compiled and not suffer from any interpreter overhead. You can still use SRP everywhere, but your code will be split between Python and C. Whether this is better or worse for maintainability than selectively abandoning SRP while sticking to Python-only code depends on your team, but if you have performance-critical sections of your code, it will undoubtedly be faster than even the most optimized pure Python code interpreted by CPython. Many of Python's fastest mathematical libraries use this method (numpy and scipy IIRC). Which is a nice segue into Case 2...



Case 2.)
If you have Python code which uses C extensions (or relies on libraries which use C extensions), PyPy may or may not be useful depending on how they're written. See http://doc.pypy.org/en/latest/extending.html for details, but the summary is that CFFI has minimal overhead, while ctypes is slower (using it with PyPy may be even slower than with CPython).
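
As a hedged illustration (mine, not the answerer's) of why CFFI's overhead stays low, its ABI mode just declares the C signature and loads the shared library; here calling cos from libm on Linux:

from cffi import FFI

ffi = FFI()
ffi.cdef("double cos(double x);")  # declare the C signature we intend to call
libm = ffi.dlopen("m")             # load libm; the library name is platform-specific

print(libm.cos(0.0))               # -> 1.0

The same pattern works for your own compiled C code, and under PyPy the JIT can often bring such calls down to nearly the cost of a plain C call.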



Cython (https://cython.org/) is another option which I don't have as much experience with. I mention it for the sake of completeness so my answer can "stand on its own", but I don't claim any expertise. From my limited usage, it felt like I had to work harder to get the same speed improvements I could get "for free" with PyPy, and if I needed something better than PyPy, it was just as easy to write my own C extension (which has the benefit that if I re-use the code elsewhere or extract part of it into a library, all my code can still run under any Python interpreter and is not required to be run by Cython).



I'm scared of being "locked into" Cython, whereas any code written for PyPy can run under CPython as well.






– Steven Jackson (new contributor)
  • Yeah! I've been considering focusing on C extensions for this case instead of abandoning the principle and writing wild code; the other answers gave me the impression it would be slow regardless unless I swapped away from the reference interpreter. To clear it up: OOP would still be a sensible approach in your view? – lucasgcb, 7 hours ago

  • With case 1 (2nd paragraph), do you not get the same overhead calling the functions, even if the functions themselves are compiled? – Ewan, 4 mins ago












Your Answer








StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "131"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













draft saved

draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsoftwareengineering.stackexchange.com%2fquestions%2f390266%2fworking-through-the-single-responsibility-principle-srp-in-python-when-calls-a%23new-answer', 'question_page');

);

Post as a guest















Required, but never shown

























3 Answers
3






active

oldest

votes








3 Answers
3






active

oldest

votes









active

oldest

votes






active

oldest

votes









12















is Python simply poor at handling code breakdown altogether?




Unfortunately yes, Python is slow and there are many anecdotes about people drastically increasing performance by inlining functions and making their code ugly.



There is a work around, Cython, which is a compiled version of Python and much faster.



--Edit
I just wanted to address some of the comments and other answers. Although the thrust of them isnt perhaps python specific. but more general optimisation.




  1. Don't optimise untill you have a problem and then look for bottlenecks



    Generally good advice. But the assumption is that 'normal' code is usually performant. This isn't always the case. Individual languages and frameworks each have their own idiosyncracies. In this case function calls.




  2. Its only a few milliseconds, other things will be slower



    If you are running your code on a powerful desktop computer you probably don't care as long as your single user code executes in a few seconds.



    But business code tends to run for multiple users and require more than one machine to support the load. If your code runs twice as fast it means you can have twice the number of users or half the number of machines.



    If you own your machines and data centre then you generally have a big chunk of overhead in CPU power. If your code runs a bit slow, you can absorb it, at least until you need to buy a second machine.



    In these days of cloud computing where you only use exactly the compute power you require and no more, there is a direct cost for non performant code.



    Improving performance can drastically cut the main expense for a cloud based business and performance really should be front and centre.







share|improve this answer




















  • 1





    While Robert's Answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question ), I feel this answers the situation a bit more directly and in-line with the Python context.

    – lucasgcb
    yesterday







  • 2





    sorry its somewhat short. I don't have time to write more. But I do think Robert is wrong on this one. The best advice with python seems to be to profile as you code. Dont assume it will be performant and only optimise if you find a problem

    – Ewan
    yesterday







  • 2





    @Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling.

    – Robert Harvey
    yesterday







  • 1





    you can also try pypy, which is a JITted python

    – Eevee
    yesterday






  • 2





    @Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for python. But then I really can't think of many examples there. The vast majority of business code is IO limited and the CPU heavy stuff is usually handled by calling out to native libraries (numpy, tensorflow and so on).

    – Voo
    yesterday















12















is Python simply poor at handling code breakdown altogether?




Unfortunately yes, Python is slow and there are many anecdotes about people drastically increasing performance by inlining functions and making their code ugly.



There is a work around, Cython, which is a compiled version of Python and much faster.



--Edit
I just wanted to address some of the comments and other answers. Although the thrust of them isnt perhaps python specific. but more general optimisation.




  1. Don't optimise untill you have a problem and then look for bottlenecks



    Generally good advice. But the assumption is that 'normal' code is usually performant. This isn't always the case. Individual languages and frameworks each have their own idiosyncracies. In this case function calls.




  2. Its only a few milliseconds, other things will be slower



    If you are running your code on a powerful desktop computer you probably don't care as long as your single user code executes in a few seconds.



    But business code tends to run for multiple users and require more than one machine to support the load. If your code runs twice as fast it means you can have twice the number of users or half the number of machines.



    If you own your machines and data centre then you generally have a big chunk of overhead in CPU power. If your code runs a bit slow, you can absorb it, at least until you need to buy a second machine.



    In these days of cloud computing where you only use exactly the compute power you require and no more, there is a direct cost for non performant code.



    Improving performance can drastically cut the main expense for a cloud based business and performance really should be front and centre.







share|improve this answer




















  • 1





    While Robert's Answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question ), I feel this answers the situation a bit more directly and in-line with the Python context.

    – lucasgcb
    yesterday







  • 2





    sorry its somewhat short. I don't have time to write more. But I do think Robert is wrong on this one. The best advice with python seems to be to profile as you code. Dont assume it will be performant and only optimise if you find a problem

    – Ewan
    yesterday







  • 2





    @Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling.

    – Robert Harvey
    yesterday







  • 1





    you can also try pypy, which is a JITted python

    – Eevee
    yesterday






  • 2





    @Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for python. But then I really can't think of many examples there. The vast majority of business code is IO limited and the CPU heavy stuff is usually handled by calling out to native libraries (numpy, tensorflow and so on).

    – Voo
    yesterday













12












12








12








is Python simply poor at handling code breakdown altogether?




Unfortunately yes, Python is slow and there are many anecdotes about people drastically increasing performance by inlining functions and making their code ugly.



There is a work around, Cython, which is a compiled version of Python and much faster.



--Edit
I just wanted to address some of the comments and other answers. Although the thrust of them isnt perhaps python specific. but more general optimisation.




  1. Don't optimise untill you have a problem and then look for bottlenecks



    Generally good advice. But the assumption is that 'normal' code is usually performant. This isn't always the case. Individual languages and frameworks each have their own idiosyncracies. In this case function calls.




  2. Its only a few milliseconds, other things will be slower



    If you are running your code on a powerful desktop computer you probably don't care as long as your single user code executes in a few seconds.



    But business code tends to run for multiple users and require more than one machine to support the load. If your code runs twice as fast it means you can have twice the number of users or half the number of machines.



    If you own your machines and data centre then you generally have a big chunk of overhead in CPU power. If your code runs a bit slow, you can absorb it, at least until you need to buy a second machine.



    In these days of cloud computing where you only use exactly the compute power you require and no more, there is a direct cost for non performant code.



    Improving performance can drastically cut the main expense for a cloud based business and performance really should be front and centre.







share|improve this answer
















is Python simply poor at handling code breakdown altogether?




Unfortunately yes, Python is slow and there are many anecdotes about people drastically increasing performance by inlining functions and making their code ugly.



There is a work around, Cython, which is a compiled version of Python and much faster.



--Edit
I just wanted to address some of the comments and other answers. Although the thrust of them isnt perhaps python specific. but more general optimisation.




  1. Don't optimise untill you have a problem and then look for bottlenecks



    Generally good advice. But the assumption is that 'normal' code is usually performant. This isn't always the case. Individual languages and frameworks each have their own idiosyncracies. In this case function calls.




  2. Its only a few milliseconds, other things will be slower



    If you are running your code on a powerful desktop computer you probably don't care as long as your single user code executes in a few seconds.



    But business code tends to run for multiple users and require more than one machine to support the load. If your code runs twice as fast it means you can have twice the number of users or half the number of machines.



    If you own your machines and data centre then you generally have a big chunk of overhead in CPU power. If your code runs a bit slow, you can absorb it, at least until you need to buy a second machine.



    In these days of cloud computing where you only use exactly the compute power you require and no more, there is a direct cost for non performant code.



    Improving performance can drastically cut the main expense for a cloud based business and performance really should be front and centre.








share|improve this answer














share|improve this answer



share|improve this answer








edited 15 hours ago

























answered yesterday









EwanEwan

43.9k33699




43.9k33699







  • 1





    While Robert's Answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question ), I feel this answers the situation a bit more directly and in-line with the Python context.

    – lucasgcb
    yesterday







  • 2





    sorry its somewhat short. I don't have time to write more. But I do think Robert is wrong on this one. The best advice with python seems to be to profile as you code. Dont assume it will be performant and only optimise if you find a problem

    – Ewan
    yesterday







  • 2





    @Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling.

    – Robert Harvey
    yesterday







  • 1





    you can also try pypy, which is a JITted python

    – Eevee
    yesterday






  • 2





    @Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for python. But then I really can't think of many examples there. The vast majority of business code is IO limited and the CPU heavy stuff is usually handled by calling out to native libraries (numpy, tensorflow and so on).

    – Voo
    yesterday












  • 1





    While Robert's Answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question ), I feel this answers the situation a bit more directly and in-line with the Python context.

    – lucasgcb
    yesterday







  • 2





    sorry its somewhat short. I don't have time to write more. But I do think Robert is wrong on this one. The best advice with python seems to be to profile as you code. Dont assume it will be performant and only optimise if you find a problem

    – Ewan
    yesterday







  • 2





    @Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling.

    – Robert Harvey
    yesterday







  • 1





    you can also try pypy, which is a JITted python

    – Eevee
    yesterday






  • 2





    @Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for python. But then I really can't think of many examples there. The vast majority of business code is IO limited and the CPU heavy stuff is usually handled by calling out to native libraries (numpy, tensorflow and so on).

    – Voo
    yesterday







1




1





While Robert's Answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question ), I feel this answers the situation a bit more directly and in-line with the Python context.

– lucasgcb
yesterday






While Robert's Answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question ), I feel this answers the situation a bit more directly and in-line with the Python context.

– lucasgcb
yesterday





2




2





sorry its somewhat short. I don't have time to write more. But I do think Robert is wrong on this one. The best advice with python seems to be to profile as you code. Dont assume it will be performant and only optimise if you find a problem

– Ewan
yesterday






sorry its somewhat short. I don't have time to write more. But I do think Robert is wrong on this one. The best advice with python seems to be to profile as you code. Dont assume it will be performant and only optimise if you find a problem

– Ewan
yesterday





2




2





@Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling.

– Robert Harvey
yesterday






@Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling.

– Robert Harvey
yesterday





1




1





you can also try pypy, which is a JITted python

– Eevee
yesterday





you can also try pypy, which is a JITted python

– Eevee
yesterday




2




2





@Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for python. But then I really can't think of many examples there. The vast majority of business code is IO limited and the CPU heavy stuff is usually handled by calling out to native libraries (numpy, tensorflow and so on).

– Voo
yesterday





@Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for python. But then I really can't think of many examples there. The vast majority of business code is IO limited and the CPU heavy stuff is usually handled by calling out to native libraries (numpy, tensorflow and so on).

– Voo
yesterday













42














Many potential performance concerns are not really a problem in practice. The issue you raise may be one of them. In the vernacular, we call worrying about those problems without proof that they are actual problems premature optimization.



If you are writing a front-end for a web service, your performance is not going to be significantly affected by function calls, because the cost of sending data over a network far exceeds the time it takes to make a method call.



If you are writing a tight loop that refreshes a video screen sixty times a second, then it might matter. But at that point, I claim you have larger problems if you're trying to use Python to do that, a job for which Python is probably not well-suited.



As always, the way you find out is to measure. Run a performance profiler or some timers over your code. See if it's a real problem in practice.




The Single Responsibility Principle is not a law or mandate; it is a guideline or principle. Software design is always about trade-offs; there are no absolutes. It is not uncommon to trade off readability and/or maintainability for speed, so you may have to sacrifice SRP on the altar of performance. But don't make that tradeoff unless you know you have a performance problem.






share|improve this answer




















  • 3





    I think this was true, until we invented cloud computing. Now one of the two functions effectively costs 4 times as much as the other

    – Ewan
    yesterday






  • 2





    @Ewan 4 times may not matter until you've measured it to be significant enough to care about. If Foo takes 1 ms and Bar takes 4 ms that's not good. Until you realize that transmitting the data across the network takes 200 ms. At that point, Bar being slower doesn't matter so much. (Just one possible example of where being X times slower doesn't make a noticeable or impactful difference, not meant to be necessarily super realistic.)

    – Becuzz
    yesterday






  • 7





    @Ewan If the reduction in the bill saves you $15/month but it will take a $125/hour contractor 4 hours to fix and test it, I could easily justify that not being worth a business's time to do (or at least not do right now if time to market is crucial, etc.). There are always tradeoffs. And what makes sense in one circumstance might not in another.

    – Becuzz
    yesterday






  • 3





    your AWS bills are very low indeed

    – Ewan
    yesterday






  • 4





    @Ewan AWS rounds to the ceiling by batches anyways (standard is 100ms). Which means this kind of optimization only saves you anything if it consistently avoids pushing you to the next chunk.

    – Delioth
    yesterday
















42














Many potential performance concerns are not really a problem in practice. The issue you raise may be one of them. In the vernacular, we call worrying about those problems without proof that they are actual problems premature optimization.



If you are writing a front-end for a web service, your performance is not going to be significantly affected by function calls, because the cost of sending data over a network far exceeds the time it takes to make a method call.



If you are writing a tight loop that refreshes a video screen sixty times a second, then it might matter. But at that point, I claim you have larger problems if you're trying to use Python to do that, a job for which Python is probably not well-suited.



As always, the way you find out is to measure. Run a performance profiler or some timers over your code. See if it's a real problem in practice.




The Single Responsibility Principle is not a law or mandate; it is a guideline or principle. Software design is always about trade-offs; there are no absolutes. It is not uncommon to trade off readability and/or maintainability for speed, so you may have to sacrifice SRP on the altar of performance. But don't make that tradeoff unless you know you have a performance problem.






share|improve this answer




















  • 3





    I think this was true, until we invented cloud computing. Now one of the two functions effectively costs 4 times as much as the other

    – Ewan
    yesterday






  • 2





    @Ewan 4 times may not matter until you've measured it to be significant enough to care about. If Foo takes 1 ms and Bar takes 4 ms that's not good. Until you realize that transmitting the data across the network takes 200 ms. At that point, Bar being slower doesn't matter so much. (Just one possible example of where being X times slower doesn't make a noticeable or impactful difference, not meant to be necessarily super realistic.)

    – Becuzz
    yesterday






  • 7





    @Ewan If the reduction in the bill saves you $15/month but it will take a $125/hour contractor 4 hours to fix and test it, I could easily justify that not being worth a business's time to do (or at least not do right now if time to market is crucial, etc.). There are always tradeoffs. And what makes sense in one circumstance might not in another.

    – Becuzz
    yesterday






  • 3





    your AWS bills are very low indeed

    – Ewan
    yesterday






  • 4





    @Ewan AWS rounds to the ceiling by batches anyways (standard is 100ms). Which means this kind of optimization only saves you anything if it consistently avoids pushing you to the next chunk.

    – Delioth
    yesterday














42












42








42







Many potential performance concerns are not really a problem in practice. The issue you raise may be one of them. In the vernacular, we call worrying about those problems without proof that they are actual problems premature optimization.



If you are writing a front-end for a web service, your performance is not going to be significantly affected by function calls, because the cost of sending data over a network far exceeds the time it takes to make a method call.



If you are writing a tight loop that refreshes a video screen sixty times a second, then it might matter. But at that point, I claim you have larger problems if you're trying to use Python to do that, a job for which Python is probably not well-suited.



As always, the way you find out is to measure. Run a performance profiler or some timers over your code. See if it's a real problem in practice.




The Single Responsibility Principle is not a law or mandate; it is a guideline or principle. Software design is always about trade-offs; there are no absolutes. It is not uncommon to trade off readability and/or maintainability for speed, so you may have to sacrifice SRP on the altar of performance. But don't make that tradeoff unless you know you have a performance problem.






share|improve this answer















Many potential performance concerns are not really a problem in practice. The issue you raise may be one of them. In the vernacular, we call worrying about those problems without proof that they are actual problems premature optimization.



If you are writing a front-end for a web service, your performance is not going to be significantly affected by function calls, because the cost of sending data over a network far exceeds the time it takes to make a method call.



If you are writing a tight loop that refreshes a video screen sixty times a second, then it might matter. But at that point, I claim you have larger problems if you're trying to use Python to do that, a job for which Python is probably not well-suited.



As always, the way you find out is to measure. Run a performance profiler or some timers over your code. See if it's a real problem in practice.




The Single Responsibility Principle is not a law or mandate; it is a guideline or principle. Software design is always about trade-offs; there are no absolutes. It is not uncommon to trade off readability and/or maintainability for speed, so you may have to sacrifice SRP on the altar of performance. But don't make that tradeoff unless you know you have a performance problem.







share|improve this answer














share|improve this answer



share|improve this answer








edited yesterday

























answered yesterday









Robert HarveyRobert Harvey

167k44387601




167k44387601







  • 3





    I think this was true, until we invented cloud computing. Now one of the two functions effectively costs 4 times as much as the other

    – Ewan
    yesterday






  • 2





    @Ewan 4 times may not matter until you've measured it to be significant enough to care about. If Foo takes 1 ms and Bar takes 4 ms that's not good. Until you realize that transmitting the data across the network takes 200 ms. At that point, Bar being slower doesn't matter so much. (Just one possible example of where being X times slower doesn't make a noticeable or impactful difference, not meant to be necessarily super realistic.)

    – Becuzz
    yesterday






  • 7





    @Ewan If the reduction in the bill saves you $15/month but it will take a $125/hour contractor 4 hours to fix and test it, I could easily justify that not being worth a business's time to do (or at least not do right now if time to market is crucial, etc.). There are always tradeoffs. And what makes sense in one circumstance might not in another.

    – Becuzz
    yesterday






  • 3





    your AWS bills are very low indeed

    – Ewan
    yesterday






  • 4





    @Ewan AWS rounds to the ceiling by batches anyways (standard is 100ms). Which means this kind of optimization only saves you anything if it consistently avoids pushing you to the next chunk.

    – Delioth
    yesterday













  • 3





    I think this was true, until we invented cloud computing. Now one of the two functions effectively costs 4 times as much as the other

    – Ewan
    yesterday






  • 2





    @Ewan 4 times may not matter until you've measured it to be significant enough to care about. If Foo takes 1 ms and Bar takes 4 ms that's not good. Until you realize that transmitting the data across the network takes 200 ms. At that point, Bar being slower doesn't matter so much. (Just one possible example of where being X times slower doesn't make a noticeable or impactful difference, not meant to be necessarily super realistic.)

    – Becuzz
    yesterday






  • 7





    @Ewan If the reduction in the bill saves you $15/month but it will take a $125/hour contractor 4 hours to fix and test it, I could easily justify that not being worth a business's time to do (or at least not do right now if time to market is crucial, etc.). There are always tradeoffs. And what makes sense in one circumstance might not in another.

    – Becuzz
    yesterday






  • 3





    your AWS bills are very low indeed

    – Ewan
    yesterday






  • 4





    @Ewan AWS rounds to the ceiling by batches anyways (standard is 100ms). Which means this kind of optimization only saves you anything if it consistently avoids pushing you to the next chunk.

    – Delioth
    yesterday








3




3





I think this was true, until we invented cloud computing. Now one of the two functions effectively costs 4 times as much as the other

– Ewan
yesterday





I think this was true, until we invented cloud computing. Now one of the two functions effectively costs 4 times as much as the other

– Ewan
yesterday




2




2





@Ewan 4 times may not matter until you've measured it to be significant enough to care about. If Foo takes 1 ms and Bar takes 4 ms that's not good. Until you realize that transmitting the data across the network takes 200 ms. At that point, Bar being slower doesn't matter so much. (Just one possible example of where being X times slower doesn't make a noticeable or impactful difference, not meant to be necessarily super realistic.)

– Becuzz
yesterday





@Ewan 4 times may not matter until you've measured it to be significant enough to care about. If Foo takes 1 ms and Bar takes 4 ms that's not good. Until you realize that transmitting the data across the network takes 200 ms. At that point, Bar being slower doesn't matter so much. (Just one possible example of where being X times slower doesn't make a noticeable or impactful difference, not meant to be necessarily super realistic.)

– Becuzz
yesterday




7




7





@Ewan If the reduction in the bill saves you $15/month but it will take a $125/hour contractor 4 hours to fix and test it, I could easily justify that not being worth a business's time to do (or at least not do right now if time to market is crucial, etc.). There are always tradeoffs. And what makes sense in one circumstance might not in another.

– Becuzz
yesterday





@Ewan If the reduction in the bill saves you $15/month but it will take a $125/hour contractor 4 hours to fix and test it, I could easily justify that not being worth a business's time to do (or at least not do right now if time to market is crucial, etc.). There are always tradeoffs. And what makes sense in one circumstance might not in another.

– Becuzz
yesterday




3




3





your AWS bills are very low indeed

– Ewan
yesterday





your AWS bills are very low indeed

– Ewan
yesterday




4




4





@Ewan AWS rounds to the ceiling by batches anyways (standard is 100ms). Which means this kind of optimization only saves you anything if it consistently avoids pushing you to the next chunk.

– Delioth
yesterday






@Ewan AWS rounds to the ceiling by batches anyways (standard is 100ms). Which means this kind of optimization only saves you anything if it consistently avoids pushing you to the next chunk.

– Delioth
yesterday












2














First, some clarifications: Python is a language. There are several different interpreters which can execute code written in the Python language. The reference implementation (CPython) is usually what is being referenced when someone talks about "Python" as if it is an implementation, but it is important to be precise when talking about performance characteristics, as they can differ wildly between implementations.




How and where do we embrace the SRP without compromising performance in Python, as its inherent implementation directly impacts it?




Case 1.)
If you have pure Python code (Python language version <= 3.5; 3.6 has "beta level support") which relies only on pure Python modules, you can embrace SRP everywhere and use PyPy to run it. PyPy (https://morepypy.blogspot.com/2019/03/pypy-v71-released-now-uses-utf-8.html) is a Python interpreter with a just-in-time (JIT) compiler that can remove function call overhead, as long as it has had enough time to "warm up" by tracing the executed code (a few seconds, IIRC).
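As a rough illustration of what that warm-up buys you, here is a minimal, hypothetical sketch (not from the original answer; all function names are invented): the same loop written as small single-purpose functions and as one monolithic function. Under CPython the decomposed version pays call overhead on every iteration; under a warmed-up PyPy the JIT traces the hot loop and inlines the small functions, so the two versions converge.

```python
# Hypothetical benchmark: SRP-style decomposition vs. a hand-inlined monolith.

def is_valid(x):
    return x >= 0          # one responsibility: validation

def transform(x):
    return x * x           # one responsibility: transformation

def pipeline(values):
    # Small functions composed per SRP; each call costs overhead on CPython.
    return sum(transform(x) for x in values if is_valid(x))

def monolith(values):
    # The same work inlined by hand; no extra call overhead.
    return sum(x * x for x in values if x >= 0)

if __name__ == "__main__":
    import timeit
    data = list(range(10_000))
    for fn in (pipeline, monolith):
        # number=200 also gives PyPy's JIT time to warm up on the loop
        print(fn.__name__, timeit.timeit(lambda: fn(data), number=200))
```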



If you are restricted to the CPython interpreter, you can extract the slow functions into extensions written in C, which are pre-compiled and do not suffer from interpreter overhead. You can still use SRP everywhere, but your code will be split between Python and C. Whether this is better or worse for maintainability than selectively abandoning SRP while sticking to pure Python depends on your team; but if you have performance-critical sections of code, this will undoubtedly be faster than even the most optimized pure Python interpreted by CPython. Many of Python's fastest mathematical libraries (numpy and scipy, IIRC) use this method, which is a nice segue into Case 2...
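The shape of that split might look like the following sketch (my illustration, not the answer's code): the orchestration stays in small single-purpose Python functions, while the hot numeric kernel runs in pre-compiled code, here via numpy, which the answer cites as an example of the approach.

```python
# Minimal sketch: SRP-style Python functions delegating the hot loop
# to pre-compiled code (numpy). Names are invented for illustration.
import numpy as np

def load(values):
    """One responsibility: normalize input into a float64 array."""
    return np.asarray(values, dtype=np.float64)

def magnitude(arr):
    """One responsibility: the numeric kernel, executed in compiled C."""
    return float(np.sqrt((arr * arr).sum()))

print(magnitude(load([3.0, 4.0])))  # 5.0
```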



Case 2.)
If you have Python code which uses C extensions (or relies on libraries which do), PyPy may or may not be useful, depending on how those extensions are written. See http://doc.pypy.org/en/latest/extending.html for details, but the summary is that CFFI has minimal overhead, while ctypes is slower (using ctypes with PyPy may be even slower than with CPython).
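For reference, CFFI's out-of-line "API mode" looks roughly like the following; the module name `_fast` and the `dot` function are my inventions for the sketch, not anything from the answer or the PyPy docs.

```python
# build_fast.py -- hypothetical CFFI (API mode) build script.
from cffi import FFI

ffibuilder = FFI()
ffibuilder.cdef("double dot(double *a, double *b, int n);")
ffibuilder.set_source(
    "_fast",  # name of the extension module to generate
    """
    double dot(double *a, double *b, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }
    """,
)

if __name__ == "__main__":
    ffibuilder.compile(verbose=True)  # compiles _fast as a C extension
```

After running the build script once, the compiled module imports like any other:

```python
from _fast import ffi, lib

a = ffi.new("double[]", [1.0, 2.0, 3.0])
b = ffi.new("double[]", [4.0, 5.0, 6.0])
print(lib.dot(a, b, 3))  # 32.0
```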



Cython (https://cython.org/) is another option, though one I have much less experience with. I mention it for the sake of completeness so my answer can "stand on its own", but I don't claim any expertise. From my limited usage, it felt like I had to work harder to get the same speed improvements I could get "for free" with PyPy, and if I needed something better than PyPy, it was just as easy to write my own C extension (which has the benefit that, if I re-use the code elsewhere or extract part of it into a library, all of my code can still run under any Python interpreter and is not required to be run by Cython).
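For completeness, Cython also has a "pure Python" mode in which the source stays valid Python, which softens (though doesn't eliminate) the lock-in worry. The sketch below is my illustration, not anything from the answer: it runs unmodified under any interpreter that has the cython package installed, and `cythonize -i fib.py` compiles it to a C extension for speed.

```python
# fib.py -- hypothetical sketch using Cython's pure-Python mode.
# The cython.int annotations are hints for the compiler; as plain
# Python they are ignored and the file behaves like ordinary code.
import cython

def fib(n: cython.int) -> cython.int:
    a: cython.int = 0
    b: cython.int = 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(10))  # 55
```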



I'm scared of being "locked into" Cython, whereas any code written for PyPy can run under CPython as well.






answered 8 hours ago

Steven Jackson
211

Steven Jackson is a new contributor to this site.
  • Yeah! I've been considering focusing on C extensions for this case instead of abandoning the principle and writing wild code; the other answers gave me the impression it would be slow regardless unless I swapped out the reference interpreter. To clear things up: in your view, would OOP still be a sensible approach?

    – lucasgcb
    7 hours ago












  • With case 1 (2nd para), don't you get the same overhead calling the functions, even if the functions themselves are compiled?

    – Ewan
    4 mins ago
















methods, performance, python, single-responsibility
