Python "Killed" error

I have a Python script that imports a large CSV file and then counts the number of occurrences of each word in the file, then exports the counts to another CSV file.

But what is happening is that once the counting part is finished and the exporting begins, it says Killed in the terminal.

I don’t think this is a memory problem (if it were, I assume I would get a MemoryError and not Killed).

Could it be that the process is taking too long? If so, is there a way to extend the time-out period so I can avoid this?

Here is the code:

import csv
import sys

csv.field_size_limit(sys.maxsize)
counter = {}
with open("/home/alex/Documents/version2/cooccur_list.csv", 'rb') as file_name:
    reader = csv.reader(file_name)
    for row in reader:
        if len(row) > 1:
            pair = row[0] + ' ' + row[1]
            if pair in counter:
                counter[pair] += 1
            else:
                counter[pair] = 1
print 'finished counting'
writer = csv.writer(open('/home/alex/Documents/version2/dict.csv', 'wb'))
for key, value in counter.items():
    writer.writerow([key, value])

The Killed happens after finished counting has been printed, and the full message is:

killed (program exited with code: 137)

asked Oct 4, 2013 at 19:44 by user1893354

Exit code 137 (128+9) indicates that your program exited due to receiving signal 9, which is SIGKILL. This also explains the killed message. The question is, why did you receive that signal?
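To see where 137 comes from, here is a minimal sketch (my own illustration, not part of the original answer): a process killed with SIGKILL dies on signal 9, and the shell reports that as 128 + 9 = 137.

import signal
import subprocess

# Start a long-running child process, then kill it the same way the
# kernel's OOM killer would (SIGKILL cannot be caught or ignored).
proc = subprocess.Popen(["sleep", "60"])
proc.send_signal(signal.SIGKILL)
proc.wait()

# subprocess reports death-by-signal as a negative return code;
# a shell would show the same event as exit status 128 + 9 = 137.
print(proc.returncode)  # -9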

The most likely reason is that your process crossed some limit on the amount of system resources you are allowed to use. Depending on your OS and configuration, this could mean you had too many open files, used too much filesystem space, or something else. Most likely, though, your program was using too much memory. Rather than risk things breaking when memory allocations started failing, the system sent a kill signal to the process that was using too much memory.

As I commented earlier, one reason you might hit a memory limit after printing finished counting is that your call to counter.items() in your final loop allocates a list that contains all the keys and values from your dictionary. If your dictionary has a lot of data, this can be a very big list. A possible solution is to use counter.iteritems(), which returns an iterator. Rather than building a list of all the items, it lets you iterate over them with much less memory usage.

So, I’d suggest trying this, as your final loop:

for key, value in counter.iteritems():
    writer.writerow([key, value])

Note that in Python 3, items returns a "dictionary view" object, which does not have the same overhead as Python 2's version. It replaces iteritems, so if you later upgrade Python versions, you'll end up changing the loop back to the way it was.
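For reference, a full Python 3 version of the loop could look like the following sketch (using collections.Counter). This is my own illustration, not part of the original answer; it keeps the file paths from the question.

import csv
import sys
from collections import Counter

csv.field_size_limit(sys.maxsize)
counter = Counter()

# Count co-occurring pairs exactly as in the question, Python 3 style.
with open("/home/alex/Documents/version2/cooccur_list.csv", newline="") as f:
    for row in csv.reader(f):
        if len(row) > 1:
            counter[row[0] + " " + row[1]] += 1

print("finished counting")

# items() is a view in Python 3, so writing it out does not copy the dict.
with open("/home/alex/Documents/version2/dict.csv", "w", newline="") as f:
    csv.writer(f).writerows(counter.items())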

answered Oct 5, 2013 at 0:02 by Blckknght

There are two storage areas involved: the stack and the heap. The stack is where the current state of a method call is kept (i.e. local variables and references), and the heap is where objects are stored (see recursion and memory).

I guess there are too many keys in the counter dict, which consumes too much memory in the heap region, so the Python runtime hits an out-of-memory condition.

To avoid this, don't create a giant object such as the counter.

1. StackOverflow

A program that creates too many local variables:

Python 2.7.9 (default, Mar  1 2015, 12:57:24) 
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> f = open('stack_overflow.py','w')
>>> f.write('def foo():\n')
>>> for x in xrange(10000000):
...   f.write('\tx%d = %d\n' % (x, x))
... 
>>> f.write('foo()')
>>> f.close()
>>> execfile('stack_overflow.py')
Killed

2. OutOfMemory

A program that creates a giant dict with too many keys:

>>> f = open('out_of_memory.py','w')
>>> f.write('def foo():\n')
>>> f.write('\tcounter = {}\n')
>>> for x in xrange(10000000):
...   f.write('\tcounter[%d] = %d\n' % (x, x))
... 
>>> f.write('foo()\n')
>>> f.close()
>>> execfile('out_of_memory.py')
Killed

References

  • 7. Memory : Stack vs Heap
  • recursion and memory

answered Apr 2, 2016 at 6:14 by ROY

Most likely, you ran out of memory, so the kernel killed your process.

Have you heard about the OOM Killer?

Here’s a log from a script that I developed for processing a huge set of data from CSV files:

Mar 12 18:20:38 server.com kernel: [63802.396693] Out of memory: Kill process 12216 (python3) score 915 or sacrifice child
Mar 12 18:20:38 server.com kernel: [63802.402542] Killed process 12216 (python3) total-vm:9695784kB, anon-rss:7623168kB, file-rss:4kB, shmem-rss:0kB
Mar 12 18:20:38 server.com kernel: [63803.002121] oom_reaper: reaped process 12216 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

It was taken from /var/log/syslog.

Basically:

PID 12216 was elected as a victim (due to its use of more than 9 GB of total-vm), so oom_killer reaped it.

Here's an article about OOM behavior.
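If you want to see how close your own script gets before the kernel steps in, one option (a small sketch of my own, not from this answer) is to log the peak resident memory with the standard resource module:

import resource

def log_peak_memory(label=""):
    # On Linux, ru_maxrss is the peak resident set size in kilobytes.
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print("%s peak RSS: %.1f MB" % (label, peak_kb / 1024.0))

# For example, call log_peak_memory("after counting") right after a
# memory-heavy loop to see whether you are drifting toward OOM territory.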

answered Mar 12, 2020 at 20:20 by ivanleoncz

I doubt anything is killing the process just because it takes a long time. Killed generically means something from the outside terminated the process, but probably not Ctrl-C in this case, since that would cause Python to exit with a KeyboardInterrupt exception. Also, in Python you would get a MemoryError exception if that were the problem. What might be happening is that you're hitting a bug in Python or the standard library code that causes the process to crash.

answered Oct 4, 2013 at 19:52 by Wingware

I just had the same thing happen to me when I tried to run a Python script from a shared folder in VirtualBox within the new Ubuntu 20.04 LTS. Python bailed with Killed while loading my own personal library. When I moved the folder to a local directory, the issue went away. It appears that the Killed stop happened during the initial imports of my library, as I got messages about missing libraries once I moved the folder over.

The issue went away after I restarted my computer.

Therefore, people may want to try moving the program to a local directory if it's on a share of some kind, or it could be a transient problem that just requires a reboot of the OS.

answered Apr 27, 2020 at 1:52 by Timothy C. Quinn

First you need to understand what the script is dying from. If it is the OOM killer, check the system logs:

dmesg -T | egrep -i 'killed process'

A memory shortage can occasionally be solved by adding swap (create a file or partition, initialize it with mkswap, and enable it with swapon), for example when you are only short by a few tens of percent; rarely it can help with more than that, depending on the workload.

Some dishonest hosting providers report incorrect information about the amount of available RAM, or impose various limits, for example a per-process memory limit of half or a third of the available memory; that has to be sorted out with the hoster.

Or think about how to rework the script.
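The same check can also be run from inside a Python script; a rough sketch of my own (assuming you have permission to read the kernel ring buffer):

import subprocess

# Run dmesg and print any lines mentioning a killed process.
result = subprocess.run(["dmesg", "-T"], capture_output=True, text=True)
for line in result.stdout.splitlines():
    if "killed process" in line.lower():
        print(line)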

I'm running some Python programs that are quite heavy. I've been running this script for several weeks now, but in the past couple of days the program has been getting killed with the message:

Killed

I tried creating a new swap file with 8 GB, but it kept happening.

I also tried using:

dmesg -T| grep -E -i -B100 'killed process'

which listed out the error:

[Sat Oct 17 02:08:41 2020] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/user@1000.service,task=python,pid=56849,uid=1000
[Sat Oct 17 02:08:41 2020] Out of memory: Killed process 56849 (python) total-vm:21719376kB, anon-rss:14311012kB, file-rss:0kB, shmem-rss:4kB, UID:1000 pgtables:40572kB oom_score_adj:0

I have a strong machine, and I also tried not running anything else while the script runs (just PyCharm or a terminal), but it keeps happening.

specs:

  • Ubuntu 20.04 LTS (64bit)
  • 15.4 GiB RAM
  • Intel Core i7-105100 CPU @ 1.80 GHz x 8

When running free -h:

             total        used        free      shared  buff/cache   available
Mem:           15Gi       2.4Gi        10Gi       313Mi       2.0Gi        12Gi
Swap:         8.0Gi       1.0Gi       7.0Gi

  1. What does it mean if I get the message "Killed" after hitting Ctrl+C whilst running a Python script?

    Usually hitting Ctrl+C gives the message "KeyboardInterrupt", but when running my program for a slightly larger integer I got the message "Killed" instead. Does this mean that the numbers became too large to cope with?

    I tried googling but wasn't successful, so an explanation would be much appreciated; even a link to a useful webpage would help.

    Thanks!


  2. Re: Python «Killed»

    It would depend on what you were doing, but it almost sounds like it was a cleaner interrupt than a regular keyboard interrupt.

    Normally a keyboard interrupt means "stop NOW", while a killed process does a proper clean-up and exit. A little bit different from organic kills.


  3. Re: Python «Killed»

    Hold on a sec.

    I think you mean to say if you run your program with "a larger integer" (you realize that's useless info here, right), then you get the "killed" message without you hitting Control-C.

    Is that correct?


  4. Re: Python «Killed»

    Sorry, "a larger integer" was indeed unhelpful. I'm trying to find the decomposition groups of polynomials mod p for various values of p. For one polynomial, p=37 gave an answer within a few seconds (and if interrupted by Ctrl+C, gave the KeyboardInterrupt message I'm used to). For the same polynomial, p=43 gave no response for several minutes, and when Ctrl+C is pressed, gives the "Killed" message.

    Could it be that the process has already been killed by the time I press Ctrl+C, but simply hasn’t yet printed the message? And what sort of problem causes Python to decide to kill a process?

    Thanks for the replies!


  5. Re: Python «Killed»

    You were doing a mathematical algorithm without bounds checking...

    "Integer too Large" is one thought... a signed 64-bit integer is 63 bits plus 1 sign bit...

    If the math crosses into the 65th bit it will cause an interrupt on the processor. The app will be killed instantly.


  6. Re: Python «Killed»

    Quote Originally Posted by imaginaryfruit:

    Sorry, "a larger integer" was indeed unhelpful. I'm trying to find the decomposition groups of polynomials mod p for various values of p. For one polynomial, p=37 gave an answer within a few seconds (and if interrupted by Ctrl+C, gave the KeyboardInterrupt message I'm used to). For the same polynomial, p=43 gave no response for several minutes, and when Ctrl+C is pressed, gives the "Killed" message.

    Could it be that the process has already been killed by the time I press Ctrl+C, but simply hasn’t yet printed the message? And what sort of problem causes Python to decide to kill a process?

    Thanks for the replies!

    Hmmm, that’s a little weird.

    If you had said it died without Control-C and with a "Killed" message, then I would have replied that your program probably ran out of swap space system-wide and the kernel decided to kill the Python process when it had to pick something. That would match both the error message and the fact that you are doing some exponential problem.

    Are you doing recursion there? Any chance you can describe in more detail what is going on inside the program?

    Are you on a 32 bit OS and how much swap do you have? Does the swap get filled up when you run with the large value?

    It could still be what I said above, with either, as you say, a delayed message, or the Python interpreter trying to deal with an out-of-memory condition (or stack overflow) during this time.


  7. Re: Python «Killed»

    Quote Originally Posted by azagaros:

    You were doing a mathematical algorithm without bounds checking...

    "Integer too Large" is one thought... a signed 64-bit integer is 63 bits plus 1 sign bit...

    If the math crosses into the 65th bit it will cause an interrupt on the processor. The app will be killed instantly.

    I don't think Python has integer overflow checking once you run out of "long", and even if it did, why wouldn't it throw an exception? (See the short illustration just after this thread.)

    The processor itself does not throw an interrupt (exception in CPU speak) on integer overflow or underflow. It sets a bit that is ignored by C, C++, Java etc. but read by many other languages, which either fall back to a bignum (or floating point in the case of some naive languages) or raise an exception (a language exception, not a CPU exception).


  8. Re: Python «Killed»

    I tried some different polynomial-prime combinations, and one did indeed give me MemoryError, so presumably that was the problem all along — though I don’t know why it didn’t give me that message for the earlier examples.

    I checked my program, and realised that while I’d found a way to avoid using huge polynomials, I hadn’t removed the statements defining them. I removed those and it works fine now.

    What is swap? (I’m new to programming, as you may have guessed.)

    Thanks for all the help!


  9. Re: Python «Killed»

    Are you using an external library that might catch the keyboard interrupt and give its own message?

    When I run long simulations I usually have some sort of progress output so that I can see it is still working. Suppose you have something that loops a million times; then use

    Code:

    for x in xrange(1000000):
      if x % 10000 == 0: print x // 10000, "% done"
      ...

    to watch progress.

    PS: are you using numpy/scipy? If you are doing mathematical things you should have a look at them.


  10. Re: Python «Killed»

    Quote Originally Posted by imaginaryfruit:

    I tried some different polynomial-prime combinations, and one did indeed give me MemoryError, so presumably that was the problem all along — though I don’t know why it didn’t give me that message for the earlier examples.

    I checked my program, and realised that while I’d found a way to avoid using huge polynomials, I hadn’t removed the statements defining them. I removed those and it works fine now.

    What is swap? (I’m new to programming, as you may have guessed.)

    Thanks for all the help!

    Assuming that my knee-jerk assessment that you were facing an out-of-memory condition is correct, you ran out of backing storage for read/write memory pages.

    The total amount of read/write memory pages all your programs can have is RAM + paging space (minus kernel internal things, of course), plus read-only file mappings, which are mostly "free" since they don't need backing space.

    Use `free` and `cat /proc/swaps` to monitor swap space. If the assumption is correct, you'll see how it gets all eaten up and then the kernel kills your memory hog.

    ssam, he's not getting the same behavior when he's using the same program with a different upper bound. If he were using a library that catches signals, he would get the same behavior with the smaller upper bound.
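A quick illustration of the point about Python integers made in post 7 (my own sketch, not from the thread): Python promotes to arbitrary-precision integers instead of overflowing at the 64-bit boundary.

# Python ints are arbitrary precision: crossing the 64-bit boundary
# silently promotes to a bignum rather than wrapping or trapping.
# (In Python 2 the int would simply become a long.)
x = 2 ** 63 - 1              # largest signed 64-bit value
print(x + 1)                 # 9223372036854775808, no exception
print((x + 1).bit_length())  # 64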


Hi,

Overview
=======

I'm doing some simple file manipulation work and the process gets
"Killed" every time I run it. No traceback, no segfault... just the
word "Killed" in the bash shell, and the process ends. The first few
batch runs would only succeed with one or two files being processed
(out of 60) before the process was "Killed". Now it makes no
successful progress at all. Just a little processing, then "Killed".

Question
=======

Any ideas? Is there a buffer limitation? Do you think it could be the
filesystem? Any suggestions appreciated... Thanks.

The code I'm running:
==================

from glob import glob

def manipFiles():
    filePathList = glob('/data/ascii/*.dat')
    for filePath in filePathList:
        f = open(filePath, 'r')
        lines = f.readlines()[2:]
        f.close()
        f = open(filePath, 'w')
        f.writelines(lines)
        f.close()
        print filePath
Sample lines in File:
================

# time, ap, bp, as, bs, price, vol, size, seq, isUpLast, isUpVol,
isCancel

1062993789 0 0 0 0 1022.75 1 1 0 1 0 0
1073883668 1120 1119.75 28 33 0 0 0 0 0 0 0
Other Info
========

- The file sizes range from 76 KB to 146 MB
- I'm running on a Gentoo Linux OS
- The filesystem is partitioned and uses XFS for the data repository, Reiser3 for all else
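For what it's worth, here is a streaming rewrite of manipFiles() (a sketch under my own assumptions, not from the thread; Python 3, since os.replace needs 3.3+) that never holds a whole file in memory:

import os
from glob import glob

def manip_files():
    for file_path in glob('/data/ascii/*.dat'):
        tmp_path = file_path + '.tmp'
        with open(file_path) as src, open(tmp_path, 'w') as dst:
            for i, line in enumerate(src):
                if i >= 2:          # drop the first two header lines
                    dst.write(line)
        os.replace(tmp_path, file_path)  # swap the rewritten file into place
        print(file_path)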

This happened for me too while reading large files. You can try rebooting your system and this should stop.

answered by Pyzard

I don't understand what happened here. Can someone please explain why Python 3.4 "killed" this script? What does this "Killed" error mean in Python?

def __init__(self, target, data_flatten, data, 
       tf, hlf, white, robert, sobel, scharr): 
    self.data_flatten = data_flatten 
    self.target = target 
    self.data = data 
    self.tf = tf 
    self.hlf = hlf 
    self.white = white 
    self.robert = robert 
    self.sobel = sobel 
    self.scharr = scharr 

with open('PI0_Electron_Mixed_2000.pickle', 'wb') as output: 
    pickle.dump(PI0_Electron_Mixed_2000, output) 

Here is the output when I ran the script in my terminal:

[[email protected] ~]$ cd PycharmProjects/ImageReader 
[[email protected] ImageReader]$ python3.4 DataCompiler.py 
Killed 
[[email protected] ImageReader]$ 

So what on earth happened? Can someone explain?


r/linuxquestions

zsh killing my python script (posted by Pherrret)

When I run my Python program in zsh using python3 Mosesh.py, I get the message zsh: killed python3 Mosesh.py after a few seconds. All the program is doing at the moment is creating a list of data to work with, but at most that should be about 14 kB, so being out of RAM doesn't seem to be the issue, as most of my googling has suggested. I've also noticed the issue persists in other shells such as bash. Any ideas what the issue is or how to fix it? Thanks.
