Q:
How to make index=False or get rid of first column while using MultiIndex and to_excel in Python
Here is the code sample:
import numpy as np
import pandas as pd
import xlsxwriter
tuples = [('bar', 'one'), ('bar', 'two'), ('baz', 'one'), ('baz', 'two'), ('foo', 'one'), ('foo', 'two'), ('qux', 'one'), ('qux', 'two')]
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
iterables = [['bar', 'baz', 'foo', 'qux'], ['one', 'two']]
pd.MultiIndex.from_product(iterables, names=['first', 'second'])
df = pd.DataFrame(np.random.randn(3, 8), index=['A', 'B', 'C'], columns=index)
print(df)
writer = pd.ExcelWriter('test.xlsx', engine='xlsxwriter')
df.to_excel(writer, sheet_name='test1')
The excel output created:
Now, how do I get rid of the first column?
Even if I don't pass index=['A', 'B', 'C'] or names=['first', 'second'], it will by default create index=[0, 1, 2].
So how do I get rid of that first column while writing the Excel file?
A:
Here's a five-line fix.
Original code:
tuples = [('bar', 'one'), ('bar', 'two'), ('baz', 'one'), ('baz', 'two'), ('foo', 'one'), ('foo', 'two'), ('qux', 'one'), ('qux', 'two')]
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
iterables = [['bar', 'baz', 'foo', 'qux'], ['one', 'two']]
df = pd.DataFrame(np.random.randn(3, 8), columns=index)
The five new lines to be added after the above code:
# Setting first column as index
df = df.set_index(('bar', 'one'))
# Removing 'bar', 'one' from the index name
df.index.name = ''
# Setting new columns Multiindex
tuples = [('', 'two'), ('baz', 'one'), ('baz', 'two'), ('foo', 'one'), ('foo', 'two'), ('qux', 'one'), ('qux', 'two')]
index_new = pd.MultiIndex.from_tuples(tuples, names=['bar', 'one'])
df.columns = index_new
Then write to Excel as you were doing:
# Writing to excel file keeping index
writer = pd.ExcelWriter('test.xlsx', engine='xlsxwriter')
df.to_excel(writer, sheet_name='test1')
Note: there's just a small drawback in that cells A1 and B1 are not merged.
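A different route worth noting (a sketch, not part of the answer above): if the two header rows don't need to survive in Excel, flattening the MultiIndex columns first makes index=False legal in to_excel, which otherwise raises NotImplementedError for MultiIndex columns without an index. The to_excel call is left commented out so the sketch has no file side effects.

```python
import numpy as np
import pandas as pd

tuples = [('bar', 'one'), ('bar', 'two'), ('baz', 'one'), ('baz', 'two'),
          ('foo', 'one'), ('foo', 'two'), ('qux', 'one'), ('qux', 'two')]
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df = pd.DataFrame(np.random.randn(3, 8), columns=index)

# Flatten the two header rows into one, e.g. ('bar', 'one') -> 'bar_one'
df.columns = ['_'.join(c) for c in df.columns.to_flat_index()]
print(df.columns[:2].tolist())  # ['bar_one', 'bar_two']

# index=False is now accepted, so no index column is written:
# df.to_excel('test.xlsx', sheet_name='test1', index=False)
```

The trade-off is the same as in the pivot-table answer further down: you lose the merged two-row header in the spreadsheet.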
A:
@meW thanks for your solution.
I was able to make this work with pivot tables; I thought this may help others as well.
I have flattened the column headings to a single row in the first code snippet.
I was also able to make it work with multiple column-heading rows; however, I noticed the columns' name value (i.e. Fee) cannot be displayed. Instead, a blank row exists in the column headers, and it also exists when saving with to_excel.
students = pd.DataFrame({'Student Names' : ['Jenny', 'Singh', 'Charles', 'Richard', 'Veena'],
'Category' : ['Online', 'Offline', 'Offline', 'Offline', 'Online'],
'Gender' : ['Female', 'Male', 'Male', 'Male', 'Female'],
'Courses': ['Java', 'Spark', 'PySpark','Hadoop','C'],
'Fee': [15000, 17000, 27000, 29000, 12000],
'Discount': [1100, 800, 1000, 1600, 600]})
Flatten Column Headings to single row
pv = pd.pivot_table(students, index=['Gender','Courses'], columns=['Fee'], values=['Discount','Category'], aggfunc = {'Discount':'mean','Category':'count'}, fill_value = 0 )
pv.columns = pd.Index( [ '_'.join([str(c) for c in c_list]) for c_list in pv.columns.values ] )
pv = pv.reset_index()
first_column = pv.columns[0]
pv=pv.set_index(first_column)
writer = pd.ExcelWriter('test.xlsx', engine='xlsxwriter')
pv.to_excel(writer, sheet_name='test1')
writer.save()
Multi Column Headings
A blank 3rd row remains.
pv = pd.pivot_table(students, index=['Gender','Courses'], columns=['Fee'], values=['Discount','Category'], aggfunc = {'Discount':'mean','Category':'count'}, fill_value = 0 )
pv = pv.reset_index()
first_column = pv.columns[0]
pv=pv.set_index(first_column)
pv.index.name=''
index_new = pd.MultiIndex.from_tuples(pv.columns.values, names=first_column)
pv.columns = index_new
writer = pd.ExcelWriter('test.xlsx', engine='xlsxwriter')
pv.to_excel(writer, sheet_name='test1')
writer.save()
Update: flattening has been made a bit easier now.
p = pd.pivot_table(students, index=['Gender','Courses'], columns=['Fee'], values=['Discount','Category'], aggfunc = {'Discount':'mean','Category':'count'}, fill_value = 0 )
p.columns = [ '_'.join([str(k) for k in cols]) for cols in p.columns.to_flat_index() ]
p=p.reset_index()
writer = pd.ExcelWriter('test2.xlsx', engine='xlsxwriter')
p.to_excel(writer, sheet_name='test2',index=False)
writer.save()
Thanks - this really helped me out.
Q:
.env file not gitignored. I had someone do it manually for me once
So I'm currently working on a project and my .env file is not greyed out (gitignored?). I'm trying to figure out what I need to do globally, because I do have the file but my .env is never greyed out. Any suggestions? I can provide screenshots if needed.
I had someone run a few commands in the terminal and was able to get my .env to go grey once. But I believe he told me he wasn't able to do it globally.
Reaching out for help.
A:
First, make sure your .env is not already tracked (otherwise, no amount of .gitignore or core.excludesFile would change anything):
cd /path/to/repo
git rm --cached .env
Then check if it is currently ignored with:
git check-ignore -v -- .env
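Putting the whole sequence together in a scratch repository (the paths, commit identity, and messages here are purely illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo "SECRET=1" > .env
git add .env
git -c user.email=you@example.com -c user.name=you commit -qm "accidentally track .env"

echo ".env" >> .gitignore               # ignore it from now on
git rm --cached -q .env                 # stop tracking; keeps the file on disk
git -c user.email=you@example.com -c user.name=you commit -qm "untrack .env"

git check-ignore -v -- .env             # reports the matching .gitignore rule
```

Once the file is untracked, git check-ignore names the rule that now ignores it; editors grey the file out based on exactly this state.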
Q:
"ObjectId' object is not iterable" error, while fetching data from MongoDB Atlas
Okay, so pardon me if I don't make much sense. I face this 'ObjectId' object is not iterable error whenever I run the collection.find() functions. Going through the answers here, I'm not sure where to start. I'm new to programming, please bear with me.
Every time I hit the route which is supposed to fetch me data from MongoDB, I get ValueError: [TypeError("'ObjectId' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')].
Help
A:
Exclude the "_id" from the output.
result = collection.find_one({'OpportunityID': oppid}, {'_id': 0})
A:
I was having a similar problem myself. Not having seen your code, I am guessing the traceback similarly traces the error to FastAPI/Starlette not being able to process the "_id" field. What you will therefore need to do is change the "_id" field in the results from an ObjectId to a string type, and rename the field to "id" (without the underscore) on return, to avoid issues with Pydantic.
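As a minimal illustration of that rename-and-stringify step (serialize_doc is a hypothetical helper, not part of any library; a plain value stands in for an ObjectId so the sketch needs no MongoDB driver):

```python
# Hypothetical helper: rename "_id" to "id" and stringify its value so
# JSON encoders and Pydantic models can handle the document.
def serialize_doc(doc: dict) -> dict:
    doc = dict(doc)  # copy, so the caller's mapping stays untouched
    if "_id" in doc:
        doc["id"] = str(doc.pop("_id"))
    return doc

# str() works on bson.ObjectId too; 42 is just a stand-in here.
print(serialize_doc({"_id": 42, "name": "Ada"}))  # {'name': 'Ada', 'id': '42'}
```

You would apply this to each document returned by find()/find_one() before handing it to the response.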
A:
First of all, if we had some examples of your code, this would be much easier. I can only assume that you are not mapping your MongoDb collection data to your Pydantic BaseModel correctly.
Read this:
MongoDB stores data as BSON. FastAPI encodes and decodes data as JSON strings. BSON has support for additional non-JSON-native data types, including ObjectId which can't be directly encoded as JSON. Because of this, we convert ObjectIds to strings before storing them as the _id.
I want to draw attention to the id field on this model. MongoDB uses _id, but in Python, underscores at the start of attributes have special meaning. If you have an attribute on your model that starts with an underscore, pydantic—the data validation framework used by FastAPI—will assume that it is a private variable, meaning you will not be able to assign it a value! To get around this, we name the field id but give it an alias of _id. You also need to set allow_population_by_field_name to True in the model's Config class.
Here is a working example:
First create the BaseModel:
class PyObjectId(ObjectId):
    """ Custom Type for reading MongoDB IDs """
    @classmethod
    def __get_validators__(cls):
        yield cls.validate

    @classmethod
    def validate(cls, v):
        if not ObjectId.is_valid(v):
            raise ValueError("Invalid object_id")
        return ObjectId(v)

    @classmethod
    def __modify_schema__(cls, field_schema):
        field_schema.update(type="string")

class Student(BaseModel):
    id: PyObjectId = Field(default_factory=PyObjectId, alias="_id")
    first_name: str
    last_name: str

    class Config:
        allow_population_by_field_name = True
        arbitrary_types_allowed = True
        json_encoders = {ObjectId: str}
Now just unpack everything:
async def get_student(student_id) -> Student:
    data = await collection.find_one({'_id': student_id})
    if data is None:
        raise HTTPException(status_code=404, detail='Student not found.')
    student: Student = Student(**data)
    return student
A:
Use db.collection.find(ObjectId: "12348901384918")
Here db is the database and collection is the collection name; use double quotes for the string.
A:
I was trying to iterate through all the documents and what worked for me was this solution https://github.com/tiangolo/fastapi/issues/1515#issuecomment-782835977
These lines just needed to be added after the child of ObjectID class. An example is given in the following link.
https://github.com/tiangolo/fastapi/issues/1515#issuecomment-782838556
A:
Use the response model inside the app decorator. Here is a sample example:
from pydantic import BaseModel

class Todo(BaseModel):
    title: str
    details: str
main.py
@app.get("/{title}", response_model=Todo)
async def get_todo(title: str):
    response = await fetch_one_todo(title)
    if not response:
        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail='not found')
    return response
A:
I had this issue until I upgraded from MongoDB version 5.0.9 to version 6.0.0, so MongoDB made some changes on their end to handle this, if you have the ability to upgrade! I ran into this issue when creating a test server, and when I created a new test server on 6.0.0, the error was fixed.
Q:
Split one row into multiple rows of 6 hours data based on 15 mins time interval in pandas data frame
I want to split one row covering 6 hours of data into multiple rows, based on a 15-minute time interval, in a pandas data frame.
start_time end_time
0 2022-08-22 00:15:00 2022-08-22 06:15:00
I have tried a one-hour time split and used the below code:
result['start_time'] = result.apply(lambda d: pd.date_range(d['start_time'],
d['end_time'],
freq='h')[:-1],
axis=1)
and it worked for me to get this
result["start_time"][0]
Output:
DatetimeIndex(['2022-08-22 00:15:00', '2022-08-22 01:15:00',
'2022-08-22 02:15:00', '2022-08-22 03:15:00',
'2022-08-22 04:15:00', '2022-08-22 05:15:00'],
dtype='datetime64[ns]', freq='H')
Now I want the frequency at a 15-minute interval, so it should give me 24 timestamps.
A:
Try 15T instead of h:
result['start_time'] = result.apply(lambda d: pd.date_range(d['start_time'],
d['end_time'],
freq='15T')[:-1],
axis=1)
OUTPUT:
DatetimeIndex(['2022-08-22 00:15:00', '2022-08-22 00:30:00',
'2022-08-22 00:45:00', '2022-08-22 01:00:00',
'2022-08-22 01:15:00', '2022-08-22 01:30:00',
'2022-08-22 01:45:00', '2022-08-22 02:00:00',
'2022-08-22 02:15:00', '2022-08-22 02:30:00',
'2022-08-22 02:45:00', '2022-08-22 03:00:00',
'2022-08-22 03:15:00', '2022-08-22 03:30:00',
'2022-08-22 03:45:00', '2022-08-22 04:00:00',
'2022-08-22 04:15:00', '2022-08-22 04:30:00',
'2022-08-22 04:45:00', '2022-08-22 05:00:00',
'2022-08-22 05:15:00', '2022-08-22 05:30:00',
'2022-08-22 05:45:00', '2022-08-22 06:00:00'],
dtype='datetime64[ns]', freq='15T')
As expected, you get your 24 timestamps.
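The title asks for multiple rows rather than one cell holding a DatetimeIndex; a sketch combining the two ideas via DataFrame.explode is below. Note that recent pandas deprecates the 'T' offset alias, so '15min' is used here, which works on both old and new versions.

```python
import pandas as pd

# One row spanning six hours, as in the question
result = pd.DataFrame({'start_time': [pd.Timestamp('2022-08-22 00:15:00')],
                       'end_time': [pd.Timestamp('2022-08-22 06:15:00')]})

# Build one list of interval starts per row, then one row per interval
result['start_time'] = result.apply(
    lambda d: list(pd.date_range(d['start_time'], d['end_time'], freq='15min')[:-1]),
    axis=1)
long = result.explode('start_time').reset_index(drop=True)
long['start_time'] = pd.to_datetime(long['start_time'])
long['end_time'] = long['start_time'] + pd.Timedelta(minutes=15)

print(len(long))  # 24
```

This yields the same 24 intervals as the second answer, but starting from the row-per-range layout the question describes.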
A:
from datetime import timedelta
df = pd.DataFrame({'start_time': ['2022-08-22 00:15:00'],'end_time': ['2022-08-22 06:15:00']})
df['start_time'] = pd.to_datetime(df['start_time'])
df['end_time'] = pd.to_datetime(df['end_time'])
df['start_time'] = df['start_time'].dt.strftime('%Y-%m-%d %H:%M:%S')
df['end_time'] = df['end_time'].dt.strftime('%Y-%m-%d %H:%M:%S')
# start_time end_time
# 0 2022-08-22 00:15:00 2022-08-22 06:15:00
new_df = pd.date_range(start=df['start_time'][0], end=df['end_time'][0], freq='15min')[:-1]
result_df = pd.DataFrame({'start_time': new_df, 'end_time': new_df + timedelta(minutes=15)})
output:
> result_df
start_time end_time
0 2022-08-22 00:15:00 2022-08-22 00:30:00
1 2022-08-22 00:30:00 2022-08-22 00:45:00
2 2022-08-22 00:45:00 2022-08-22 01:00:00
3 2022-08-22 01:00:00 2022-08-22 01:15:00
4 2022-08-22 01:15:00 2022-08-22 01:30:00
5 2022-08-22 01:30:00 2022-08-22 01:45:00
6 2022-08-22 01:45:00 2022-08-22 02:00:00
7 2022-08-22 02:00:00 2022-08-22 02:15:00
8 2022-08-22 02:15:00 2022-08-22 02:30:00
9 2022-08-22 02:30:00 2022-08-22 02:45:00
10 2022-08-22 02:45:00 2022-08-22 03:00:00
11 2022-08-22 03:00:00 2022-08-22 03:15:00
12 2022-08-22 03:15:00 2022-08-22 03:30:00
13 2022-08-22 03:30:00 2022-08-22 03:45:00
14 2022-08-22 03:45:00 2022-08-22 04:00:00
15 2022-08-22 04:00:00 2022-08-22 04:15:00
16 2022-08-22 04:15:00 2022-08-22 04:30:00
17 2022-08-22 04:30:00 2022-08-22 04:45:00
18 2022-08-22 04:45:00 2022-08-22 05:00:00
19 2022-08-22 05:00:00 2022-08-22 05:15:00
20 2022-08-22 05:15:00 2022-08-22 05:30:00
21 2022-08-22 05:30:00 2022-08-22 05:45:00
22 2022-08-22 05:45:00 2022-08-22 06:00:00
23 2022-08-22 06:00:00 2022-08-22 06:15:00
Q:
Django Rest API from Database
I have 2 APIs from my existing project. One provides the latest blog posts and the other provides sorting details. The 2nd API (sorting) gives blog post IDs and an ordering number for which post should be in the 1st, 2nd, 3rd ... nth position. If I filter the first API with a given ID, I can get the blog post details.
How can I create a Django REST API from the database? Or an API merging those 2 APIs? Any tutorial or reference which might help me?
First API response:
{
"count": 74,
"next": "https://cms.example.com/api/v2/stories/?page=2",
"previous": null,
"results": [
{
"id": 111,
"meta": {
"type": "blog.CreateStory",
"seo_title": "",
"search_description": "",
"first_published_at": "2022-10-09T07:29:17.029746Z"
},
"title": "A Test Blog Post"
},
{
"id": 105,
"meta": {
"type": "blog.CreateStory",
"seo_title": "",
"search_description": "",
"first_published_at": "2022-10-08T04:45:32.165072Z"
},
"title": "Blog Story 2"
},
2nd API Response
[
{
"featured_item": 1,
"sort_order": 0,
"featured_page": 105
},
{
"featured_item": 1,
"sort_order": 1,
"featured_page": 90
},
Here I want to create another API that applies the sorting details. For example, it would sort using https://cms.example.com/api/v2/stories/105 and fetch Title, Image & Excerpt; and if there is no data from the sorting details, it would show the first API's response by default.
A:
After searching, I found that you can make an API from the database. In settings you need to set the database credentials, then create a class inside your models.py; inside the class's Meta, set db_table to the database view's name, and then create serializers.py and views.py as you would for any REST API.
class SortAPI(models.Model):
    featured_item_id = models.IntegerField()
    sort_order = models.IntegerField()
    title = models.TextField()
    first_published_at = models.DateTimeField()
    alternative_title = models.TextField()
    excerpt = models.TextField()
    sub_heading = models.TextField()
    news_slug = models.TextField()
    img_title = models.TextField()
    img_url = models.TextField()
    img_width = models.IntegerField()
    img_height = models.IntegerField()

    class Meta:
        db_table = 'view_featured'
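Independent of the database-view approach, the merge the question asks about can also be done in plain Python. The sketch below uses the field names from the two responses above; merged_feed is a hypothetical helper, and the fallback to the unsorted list is an assumption taken from the question text.

```python
# Payload shapes taken from the question (trimmed to the relevant fields)
stories = {
    111: {"id": 111, "title": "A Test Blog Post"},
    105: {"id": 105, "title": "Blog Story 2"},
}
sorting = [
    {"featured_item": 1, "sort_order": 0, "featured_page": 105},
    {"featured_item": 1, "sort_order": 1, "featured_page": 90},
]

def merged_feed(stories, sorting):
    """Order stories by sort_order; skip ids we don't know about, and
    fall back to the plain story list when no sorting entry matches."""
    ordered = [stories[s["featured_page"]]
               for s in sorted(sorting, key=lambda s: s["sort_order"])
               if s["featured_page"] in stories]
    return ordered or list(stories.values())

print(merged_feed(stories, sorting)[0]["id"])  # 105
```

In a Django REST framework view, this function body would sit inside the view's get() after fetching both querysets (or both upstream responses).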
Q:
ParserError: ' ' expected after '"' in python pandas/dask
Hi, I'm using a 3 GB txt file and want to convert it to CSV, but it gives an error even with error_bad_lines:
ParserError: ' ' expected after '"'
Code I am using
df1 = df.read_csv("path\\logs.txt", delimiter = "\t", encoding = 'cp437',engine="python")
df1.to_csv("C:\\Data\\log1.csv",quotechar='"',error_bad_lines=False, header=None, on_bad_lines='skip')
A:
The following code locates unwanted quotation marks (' and ") between each record or tab, and replaces it with nothing.
It then replaces the tab (\t) with a comma (,).
This script uses regex to locate the unwanted quotation marks.
import re

# Use regex to locate unwanted quotation marks
pattern = re.compile(r"(?!^|\"$)[\"\']")

new_file = open("C:\\Data\\log1.csv", "a")

# Read the file
with open("path\\logs.txt", "r") as f:
    for line in f.readlines():
        new_l = ""
        for l in line.split('\t'):
            # Replace the unwanted quotation marks
            l = re.sub(pattern, "", l)
            if new_l == "":
                new_l = new_l + l
            else:
                new_l = new_l + ',' + l
        # Write the line to the new file
        new_file.write(new_l)

new_file.close()
The reason you are seeing this issue is that you have an unwanted quotation mark within the record. For example:
"The"\t"quick brown"" fox "jumps over the"\t"lazy dog"
A:
Add on_bad_lines='warn' to your read_csv. It looks like there are some malformed lines.
|
ParserError: ' ' expected after '"' in python pandas/dask
|
Hi I'm using 3GB txt file and want to change it to CSV but it gives error_bad_lines
ParserError: ' ' expected after '"'
Code I am using
df1 = pd.read_csv("path\\logs.txt", delimiter = "\t", encoding = 'cp437',engine="python")
df1.to_csv("C:\\Data\\log1.csv",quotechar='"',error_bad_lines=False, header=None, on_bad_lines='skip')
|
[
"The following code locates unwanted quotation marks (' and \") between each record or tab, and replaces it with nothing.\nIt then replaces the tab (\\t) with a comma (,).\nThis script uses regex to locate the unwanted quotation marks.\nimport re\n\n# Use regex to locate unwanted quotation marks\npattern = re.compile(r\"(?!^|\\\"$)[\\\"\\']\")\n\nnew_file = open(\"C:\\\\Data\\\\log1.csv\", \"a\")\n\n# Read the file\nwith open(\"path\\\\logs.txt\", \"r\") as f:\n for line in f.readlines():\n new_l = \"\"\n for l in line.split('\\t'):\n \n # Replace the unwanted quotation marks\n l = re.sub(pattern, \"\", l)\n if new_l == \"\":\n new_l = new_l + l\n else:\n new_l = new_l + ',' + l\n \n # Write the line to the new file \n new_file.write(new_l)\n\nnew_file.close()\n\nThe reason you are seeing the issue that you are seeing, is that you have an unwanted quotation mark within the record. For example:\n\"The\"\\t\"quick brown\"\" fox \"jumps over the\"\\t\"lazy dog\"\n\n",
"Add on_bad_lines=‘warn’ to your read_csv. Looks like there is some wrong line.\n"
] |
[
0,
0
] |
[] |
[] |
[
"dask",
"pandas",
"python"
] |
stackoverflow_0074580371_dask_pandas_python.txt
|
Q:
How to solve a delay differential equation numerically
I would like to compute the Buchstab function numerically. It is defined by the delay differential equation:
How can I compute this numerically efficiently?
A:
To get a general feeling of how DDE integration works, I'll give some code, based on the low-order Heun method (to avoid uninteresting details while still being marginally useful).
In the numerical integration the previous values are treated as a function of time like any other time-dependent term. As there is not really a functional expression for it, the solution so far will be used as a function table for interpolation. The interpolation error order should be as high as the error order of the ODE integrator, which is easy to arrange for low-order methods, but will require extra effort for higher-order methods. The solve_ivp stepper classes provide such a "dense output" interpolation per step that can be assembled into a function for the currently existing integration interval.
So after the theory, the practice. Select step size h=0.05 and convert the given history function into the start of the solution function table
u=1
u_arr = []
w_arr = []
while u<2+0.5*h:
u_arr.append(u)
w_arr.append(1/u)
u += h
Then solve the equation; for the delayed value, use interpolation in the function table, here using numpy.interp. There are other functions with more options in scipy.interpolate.
Note that h needs to be smaller than the smallest delay, so that the delayed values are from a previous step. Which is the case here.
u = u_arr[-1]
w = w_arr[-1]
while u < 4:
k1 = (-w + np.interp(u-1,u_arr,w_arr))/u
us, ws = u+h, w+h*k1
k2 = (-ws + np.interp(us-1,u_arr,w_arr))/us
u,w = us, w+0.5*h*(k1+k2)
u_arr.append(us)
    w_arr.append(w)
Now the numerical approximation can be further processed, for instance plotted.
plt.plot(u_arr,w_arr); plt.grid(); plt.show()
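Assembled into one self-contained script (imports added, the table storing the corrected Heun value rather than the Euler predictor, and the plot left out so it runs headless), with a sanity check against the closed-form value that follows from integrating (u·w)' = w(u-1) with the history w(u) = 1/u:

```python
import numpy as np

h = 0.05

# History: w(u) = 1/u on [1, 2], tabulated as the start of the solution table.
u_arr, w_arr = [], []
u = 1.0
while u < 2 + 0.5 * h:
    u_arr.append(u)
    w_arr.append(1.0 / u)
    u += h

# Heun (explicit trapezoidal) integration of w'(u) = (w(u-1) - w(u)) / u;
# the delayed value is read from the growing table by linear interpolation.
u, w = u_arr[-1], w_arr[-1]
while u < 4:
    k1 = (-w + np.interp(u - 1, u_arr, w_arr)) / u
    us, ws = u + h, w + h * k1                      # Euler predictor
    k2 = (-ws + np.interp(us - 1, u_arr, w_arr)) / us
    u, w = us, w + 0.5 * h * (k1 + k2)              # Heun corrector
    u_arr.append(u)
    w_arr.append(w)

# Sanity check: integrating (u*w)' = w(u-1) = 1/(u-1) over [2, 3] gives
# w(u) = (1 + log(u-1)) / u, so w(3) = (1 + log 2) / 3 ~ 0.5644.
w3 = np.interp(3.0, u_arr, w_arr)
print(round(w3, 4))
```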
|
How to solve a delay differential equation numerically
|
I would like to compute the Buchstab function numerically. It is defined by the delay differential equation:
How can I compute this numerically efficiently?
|
[
"To get a general feeling of how DDE integration works, I'll give some code, based on the low-order Heun method (to avoid uninteresting details while still being marginally useful).\nIn the numerical integration the previous values are treated as a function of time like any other time-depending term. As there is not really a functional expression for it, the solution so-far will be used as a function table for interpolation. The interpolation error order should be as high as the error order of the ODE integrator, which is easy to arrange for low-order methods, but will require extra effort for higher order methods. The solve_ivp stepper classes provide such a \"dense output\" interpolation per step that can be assembled into a function for the currently existing integration interval.\n\nSo after the theory the praxis. Select step size h=0.05, convert the given history function into the start of the solution function table\nu=1\nu_arr = []\nw_arr = []\nwhile u<2+0.5*h:\n u_arr.append(u)\n w_arr.append(1/u)\n u += h\n\nThen solve the equation, for the delayed value use interpolation in the function table, here using numpy.interp. There are other functions with more options in `scipy.interpolate.\nNote that h needs to be smaller than the smallest delay, so that the delayed values are from a previous step. Which is the case here.\nu = u_arr[-1]\nw = w_arr[-1]\nwhile u < 4:\n k1 = (-w + np.interp(u-1,u_arr,w_arr))/u\n us, ws = u+h, w+h*k1\n k2 = (-ws + np.interp(us-1,u_arr,w_arr))/us\n u,w = us, w+0.5*h*(k1+k2)\n u_arr.append(us)\n w_arr.append(ws)\n\nNow the numerical approximation can be further processed, for instance plotted.\nplt.plot(u_arr,w_arr); plt.grid(); plt.show()\n\n\n"
] |
[
1
] |
[] |
[] |
[
"differential_equations",
"math",
"number_theory",
"python"
] |
stackoverflow_0074578027_differential_equations_math_number_theory_python.txt
|
Q:
Winshell error win32con not found
Traceback (most recent call last):
File "C:/Users/owner/Desktop/2/test2.py", line 1, in <module>
import os, winshell
File "C:\py35\lib\site-packages\winshell.py", line 30, in <module>
import win32con
ImportError: No module named 'win32con'
I've seen:
http://error.news/question/6131746/why-does-pip-install-winshell-not-work-on-python-v342/
But I installed pywin32 64 bit separately and done it via the exe:
https://drive.google.com/file/d/0B2FZnKhR7OOJZ1hYZER2WUwyUzA/view?usp=sharing
So how about: Why does pip install winshell not work on Python v3.4.2?
Err, no. I installed it separately.
I then went to see: What's win32con module in python? Where can I find it?
I need to know: What do I need to do to get winshell to work. I have manually installed pywin32 (64bit), I ran the exe for pywin32 (64bit) and completed it successfully, I then proceeded to CMD and did:
cd c:\py35\scripts
pip install winshell
The install completed successfully. However, importing winshell still doesn't work!
A:
IT WORKED AT LAST
What I did:
Run CMD with elevated privileges and commands:
cd pathto\pythondirectory\scripts
pywin32_postinstall.py -install
Turns out that this would not have run and the DLLs would not have copied over correctly if you didn't have full admin.
Also a very notable page: https://blogs.msdn.microsoft.com/pythonengineering/2016/04/11/unable-to-find-vcvarsall-bat/
A:
There is a bug in the dependencies list for the winshell package. The pypiwin32 package is required. This bug has already been reported to the maintainers of the winshell package, but unfortunately it appears that the maintainers have stopped supporting it. I used the winshell package from http://www.lfd.uci.edu/~gohlke/pythonlibs/#winshell since it is newer (ver. 0.6.4) than the one on PyPI (ver. 0.6). See What's win32con module in python? Where can I find it? for more information and helpful links.
pip install pypiwin32
[Download Gohlke's package to a local folder, C:\downloads\new in this example.]
pip install C:\downloads\new\winshell-0.6.4-py2.py3-none-any.whl
A:
While some people say to do various things in an elevated command prompt, what worked for me was to just run pip install pywin32 in a normal command prompt.
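As a quick way to verify the result of any of these installs (a sketch; on non-Windows machines, or before installing, it will simply report the module as absent):

```python
import importlib.util

# After `pip install pywin32`, win32con should be importable. find_spec
# checks this without raising ImportError, so the script also runs
# harmlessly on machines where pywin32 is absent (e.g. non-Windows).
spec = importlib.util.find_spec("win32con")
print("win32con available:", spec is not None)
```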
|
Winshell error win32con not found
|
Traceback (most recent call last):
File "C:/Users/owner/Desktop/2/test2.py", line 1, in <module>
import os, winshell
File "C:\py35\lib\site-packages\winshell.py", line 30, in <module>
import win32con
ImportError: No module named 'win32con'
I've seen:
http://error.news/question/6131746/why-does-pip-install-winshell-not-work-on-python-v342/
But I installed pywin32 64 bit separately and done it via the exe:
https://drive.google.com/file/d/0B2FZnKhR7OOJZ1hYZER2WUwyUzA/view?usp=sharing
So how about: Why does pip install winshell not work on Python v3.4.2?
Err, no. I installed it separately.
I then went to see: What's win32con module in python? Where can I find it?
I need to know: What do I need to do to get winshell to work. I have manually installed pywin32 (64bit), I ran the exe for pywin32 (64bit) and completed it successfully, I then proceeded to CMD and did:
cd c:\py35\scripts
pip install winshell
The install completed successfully. However, importing winshell still doesn't work!
|
[
"IT WORKED AT LAST\nWhat I did:\nRun CMD with elevated privileges and commands:\ncd pathto\\pythondirectory\\scripts\npywin32_postinstall.py -install\n\nTurns out that this would not have run and the DLLs would not have copied over correctly if you didn't have full admin.\nAlso a very notable page: https://blogs.msdn.microsoft.com/pythonengineering/2016/04/11/unable-to-find-vcvarsall-bat/\n",
"There is a bug in the dependencies list for the winshell package. The pypiwin32 package is required. This bug has already been reported to the maintainers of the winshell package, but unfortunately it appears that the maintainers have stopped supporting it. I used the winshell package from http://www.lfd.uci.edu/~gohlke/pythonlibs/#winshell since it is newer (ver. 0.6.4) than the one on PyPI (ver. 0.6). See What's win32con module in python? Where can I find it? for more information and helpful links.\npip install pypiwin32\n[Download Gohlke's package to a local folder, C:\\downloads\\new in this example.]\npip install C:\\downloads\\new\\winshell-0.6.4-py2.py3-none-any.whl\n\n",
"While some people say to do various things in an elevated command prompt, what worked for me was to just run pip install pywin32 in a normal command prompt.\n"
] |
[
3,
3,
0
] |
[] |
[] |
[
"pip",
"python",
"python_3.x",
"python_winshell"
] |
stackoverflow_0033591093_pip_python_python_3.x_python_winshell.txt
|
Q:
How to read docx files from azure blob using Python
How to read docx files from azure blob using Python?
I use the following code, but finally, blob_content has all unreadable characters. This code works fine for txt files but not for MS Word Documents (*.docx).
Please help if you have any solution.
blob_service_client_instance = BlobServiceClient(account_url=STORAGEACCOUNTURL, credential=STORAGEACCOUNTKEY)
blob_client_instance = blob_service_client_instance.get_blob_client(container_name, blob_name, snapshot=None)
blob_download = blob_client_instance.download_blob()
blob_content = blob_download.readall().decode('utf-8')
A:
I tried in my environment and got below results:
Initially I tried this piece of code to read the docx file from azure blob storage through Visual Studio Code.
In the portal, I have a docx file in azure blob storage
from azure.storage.blob import BlobServiceClient

client = BlobServiceClient.from_connection_string("<Connection string>")
serviceclient = client.get_container_client("test")
bc = serviceclient.get_blob_client(blob="sample.docx")
with open("sample.docx", 'wb') as file:
    data = bc.download_blob()
    file.write(data.readall())
The above code worked and downloaded the docx file from azure blob storage, although when I try to open the file it opens in the source code editor, not in a docx editor.
Console:
Then I used a piece of code to read the docx file that was downloaded from azure blob storage.
Code:
import docx

doc = docx.Document("<path of the downloaded file>")
all_paras = doc.paragraphs
for para in all_paras:
    print(para.text)
Console:
After I executed the above code, I am able to read the docx file successfully.
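To see why blob_download.readall().decode('utf-8') produced unreadable characters: a .docx file is a ZIP archive of XML parts, so the downloaded bytes are binary, not text. A stdlib-only sketch (the helper name is my own; python-docx does the real parsing):

```python
import io
import zipfile

def docx_body_xml(blob_bytes: bytes) -> str:
    """Return the raw body XML of a .docx; the bytes themselves are a ZIP."""
    with zipfile.ZipFile(io.BytesIO(blob_bytes)) as z:
        return z.read("word/document.xml").decode("utf-8")

# Illustration with an in-memory stand-in for the downloaded blob:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", "<w:document>hello</w:document>")
print("hello" in docx_body_xml(buf.getvalue()))  # → True
```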
|
How to read docx files from azure blob using Python
|
How to read docx files from azure blob using Python?
I use the following code, but finally, blob_content has all unreadable characters. This code works fine for txt files but not for MS Word Documents (*.docx).
Please help if you have any solution.
blob_service_client_instance = BlobServiceClient(account_url=STORAGEACCOUNTURL, credential=STORAGEACCOUNTKEY)
blob_client_instance = blob_service_client_instance.get_blob_client(container_name, blob_name, snapshot=None)
blob_download = blob_client_instance.download_blob()
blob_content = blob_download.readall().decode('utf-8')
|
[
"I tried in my environment and got below results:\nInitially I tried the piece of code to read the docx file from azure blob storage through visual studio code.\nIn portal, I have a docx file in azure blob storage\n\nfrom azure.storage.blob import BlobServiceClient\n\nclient=BlobServiceClient.from_connection_string(\"<Connection string>\")\nserviceclient = client.get_container_client(\"test\")\nbc = serviceclient.get_blob_client(blob=\"sample.docx\")\n with open(\"sample.docx\", 'wb') as file:\ndata = bc.download_blob()\nfile.write(data.readall())\n\nThe above code worked and downloaded the docx file from azure blob storage. when I try to open the file it is source code editor not in docx code editor.\nConsole:\n\nAfter I used piece of code to read a docx file from which is downloaded from azure blob Storage.\nCode:\nimport docx\ndoc = docx.Document(\"<path of the downloaded file >\")\nall_paras = doc.paragraphs\nfor para in all_paras:\nprint(para.text)\n\nConsole:\nAfter I executed the above code, I am able to read the docx file successfully.\n\n"
] |
[
0
] |
[] |
[] |
[
"azure",
"azure_blob_storage",
"ms_word",
"python"
] |
stackoverflow_0074571122_azure_azure_blob_storage_ms_word_python.txt
|
Q:
TensorFlow not found using pip
I'm trying to install TensorFlow using pip:
$ pip install tensorflow --user
Collecting tensorflow
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow
What am I doing wrong? So far I've used Python and pip with no issues.
A:
I found this to finally work.
python3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.12.0-py3-none-any.whl
Edit 1: This was tested on Windows (8, 8.1, 10), Mac and Linux. Change python3 to python according to your configuration. Change py3 to py2 in the url if you are using Python 2.x.
Edit 2: A list of different versions if someone needs: https://storage.googleapis.com/tensorflow
Edit 3: A list of urls for the available wheel packages is available here:
https://www.tensorflow.org/install/pip#package-location
A:
You need a 64-bit version of Python and in your case are using a 32-bit version. As of now Tensorflow only supports 64-bit versions of Python 3.5.x through 3.8.x on Windows. See the install docs to see what is currently supported
To check which version of Python you are running, type python or python3 to start the interpreter, and then type import struct;print(struct.calcsize("P") * 8) and that will print either 32 or 64 to tell you which bit version of Python you are running.
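The bitness check from the paragraph above, as a runnable one-off:

```python
import struct

# A pointer ("P") is 8 bytes on a 64-bit interpreter and 4 on a 32-bit one.
bits = struct.calcsize("P") * 8
print(bits)  # 32 or 64
```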
From comments:
To download a different version of Python for Windows, go to python.org/downloads/windows and scroll down until you see the version you want that ends in a "64". That will be the 64 bit version that should work with tensorflow
A:
You need to use the right version of Python and pip.
On Windows 10, with Python 3.6.X version I was facing the same problem, then after checking deliberately I noticed I had the Python-32 bit installation on my 64 bit machine. Remember TensorFlow is only compatible with 64bit installation of Python, not the 32 bit version of Python
If we download Python from python.org, the default installation would be 32 bit. So we have to download the 64 bit installer manually to install Python 64 bit. And then add below to PATH environment.
C:\Users\AppData\Local\Programs\Python\Python36
C:\Users\AppData\Local\Programs\Python\Python36\Scripts
Then run gpupdate /Force on command prompt. If the Python command doesn't work for 64 bit then restart your machine.
Then run python on command prompt. It should show 64 bit.
C:\Users\YOURNAME>python
Python 3.6.3 (v3.6.3:2c5fed8, Oct 3 2017, 18:11:49) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Then run below command to install tensorflow CPU version (recommended)
pip3 install --upgrade tensorflow
October 2020 update:
Tensorflow now supports Python 3.5.x through Python 3.8.x, but you still have to use a 64-bit version.
If you need to run multiple versions of Python on the same machine, you can use a virtual environment to help manage them.
A:
From tensorflow website: "You will need pip version 8.1 or later for the following commands to work". Run this command to upgrade your pip, then try install tensorflow again:
pip install --upgrade pip
A:
If you are trying to install it on a windows machine you need to have a 64-bit version of python 3.5. This is the only way to actually install it. From the website:
TensorFlow supports only 64-bit Python 3.5 on Windows. We have tested the pip packages with the following distributions of Python:
Python 3.5 from Anaconda
Python 3.5 from python.org.
You can download the proper version of python from here (make sure you grab one of the ones that says "Windows x86-64")
You should now be able to install with pip install tensorflow or python -m pip install tensorflow (make sure that you are using the right pip, from python3, if you have both python2 and python3 installed)
Remember to install Anaconda 3-5.2.0 as the latest version which is 3-5.3.0 have python version 3.7 which is not supported by Tensorflow.
A:
I figured out that TensorFlow 1.12.0 only works with Python version 3.5.2. I had Python 3.7 but that didn't work. So, I had to downgrade Python and then I could install TensorFlow to make it work.
To downgrade your python version from 3.7 to 3.6
conda install python=3.6.8
A:
Updated 11/28/2016: TensorFlow is now available in PyPI, starting with release 0.12. You can type
pip install tensorflow
...or...
pip install tensorflow-gpu
...to install the CPU-only or GPU-accelerated version of TensorFlow respectively.
Previous answer: TensorFlow is not yet in the PyPI repository, so you have to specify the URL to the appropriate "wheel file" for your operating system and Python version.
The full list of supported configurations is listed on the TensorFlow website, but for example, to install version 0.10 for Python 2.7 on Linux, using CPU only, you would type the following command:
$ pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0rc0-cp27-none-linux_x86_64.whl
A:
Install the Python 3.5.x 64-bit (AMD64) version from here. Make sure you add Python to your PATH variable. Then open a command prompt and type
python -m pip install --upgrade pip
should give you the following result :
Collecting pip
Using cached pip-9.0.1-py2.py3-none-any.whl
Installing collected packages: pip
Found existing installation: pip 7.1.2
Uninstalling pip-7.1.2:
Successfully uninstalled pip-7.1.2
Successfully installed pip-9.0.1
Now type
pip3 install --upgrade tensorflow
A:
I had the same problem and solved with this:
# Ubuntu/Linux 64-bit, CPU only, Python 2.7
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.12.1-cp27-none-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 2.7
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
# Mac OS X, CPU only, Python 2.7:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.1-py2-none-any.whl
# Mac OS X, GPU enabled, Python 2.7:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-0.12.1-py2-none-any.whl
# Ubuntu/Linux 64-bit, CPU only, Python 3.4
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.12.1-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 3.4
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-0.12.1-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, CPU only, Python 3.5
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.12.1-cp35-cp35m-linux_x86_64.whl
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-0.12.1-cp35-cp35m-linux_x86_64.whl
# Mac OS X, CPU only, Python 3.4 or 3.5:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.1-py3-none-any.whl
# Mac OS X, GPU enabled, Python 3.4 or 3.5:
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-0.12.1-py3-none-any.whl
Plus:
# Python 2
(tensorflow)$ pip install --upgrade $TF_BINARY_URL
# Python 3
(tensorflow)$ pip3 install --upgrade $TF_BINARY_URL
Found on Docs.
UPDATE!
There are new links for new versions
For example, for installing tensorflow v1.0.0 in OSX you need to use:
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0-py2-none-any.whl
instead of
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.1-py2-none-any.whl
A:
I had the same error when trying to install on my Mac (using Python 2.7). A similar solution to the one I'm giving here also seemed to work for Python 3 on Windows 8.1 according to a different answer on this page by Yash Kumar Verma
Solution
Step 1: go to The URL of the TensorFlow Python package section of the TensorFlow installation page and copy the URL of the relevant link for your Python installation.
Step 2: open a terminal/command prompt and run the following command:
pip install --upgrade [paste copied url link here]
So for me it was the following:
pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.0-py2-none-any.whl
Update (July 21 2017): I tried this with some others who were running on Windows machines with Python 3.6 and they had to change the line in Step 2 to:
python -m pip install [paste copied url link here]
Update (26 July 2018): For Python 3.6.2 (not 3.7 because it's in 3.6.2 in TF Documentation), you can also use pip3 install --upgrade [paste copied URL here] in Step 2.
A:
Try this:
export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.1-py3-none-any.whl
pip3 install --upgrade $TF_BINARY_URL
Source: https://www.tensorflow.org/get_started/os_setup (page no longer exists)
Update 2/23/17
Documentation moved to: https://www.tensorflow.org/install
A:
Install Python, ticking "Add Python to PATH"
pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0-py3-none-any.whl
This works for windows 10.0
A:
Try this, it should work:
python.exe -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0-py3-none-any.whl
A:
I had the same problem. After uninstalling the 32-bit version of python and reinstalling the 64-bit version I tried reinstalling TensorFlow and it worked.
Link to TensorFlow guide: https://www.tensorflow.org/install/install_windows
A:
If you run into this issue recently (say, after Python 3.7 release in 2018), most likely this is caused by the lack of Python 3.7 support (yet) from the tensorflow side. Try using Python 3.6 instead if you don't mind. There are some tricks you can find from https://github.com/tensorflow/tensorflow/issues/20444, but use them at your own risk. I used the one harpone suggested - first downloaded the tensorflow wheel for Python 3.6 and then renamed it manually...
cp tensorflow-1.11.0-cp36-cp36m-linux_x86_64.whl tensorflow-1.11.0-cp37-cp37m-linux_x86_64.whl
pip install tensorflow-1.11.0-cp37-cp37m-linux_x86_64.whl
The good news is that there is a pull request for 3.7 support already. Hope it will be released soon.
A:
There are multiple groups of answers to this question. This answer aims to generalize one group of answers:
There may not be a version of TensorFlow that is compatible with your version of Python. This is particularly true if you're using a new release of Python. For example, there may be a delay between the release of a new version of Python and the release of TensorFlow for that version of Python.
In this case, I believe your options are to:
Upgrade or downgrade to a different version of Python. (Virtual environments are good for this, e.g. conda install python=3.6)
Select a specific version of tensorflow that is compatible with your version of python, e.g. if you're still using python3.4: pip install tensorflow==2.0
Compile TensorFlow from the source code.
Wait for a new release of TensorFlow which is compatible with your version of Python.
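The decision above can be sketched as a small preflight helper; the version table here is purely illustrative and hypothetical (check TensorFlow's official tested-build tables for the real ranges):

```python
import struct
import sys

# Hypothetical support table for illustration only -- consult TensorFlow's
# official "tested build configurations" page for the real ranges.
LAST_TF_FOR_PY = {(3, 5): "2.1", (3, 6): "2.6", (3, 7): "2.10", (3, 8): "2.13"}

def install_hint(py=None, bits=None):
    """Suggest which of the options above applies to this interpreter."""
    py = py or sys.version_info[:2]
    bits = bits or struct.calcsize("P") * 8
    if bits != 64:
        return "switch to a 64-bit Python"                           # option 1
    if py in LAST_TF_FOR_PY:
        return "pip install 'tensorflow<=%s'" % LAST_TF_FOR_PY[py]   # option 2
    return "change Python, build from source, or wait"               # options 1, 3, 4

print(install_hint((3, 6), 64))  # → pip install 'tensorflow<=2.6'
```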
A:
As of today, if anyone else is wondering:
Python >= 3.9 will cause the same issue.
Uninstall Python 3.9 and install 3.8; it should resolve it.
A:
If you are using the Anaconda Python installation, pip install tensorflow will give the error stated above, shown below:
Collecting tensorflow
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow
According to the TensorFlow installation page, you will need to use the --ignore-installed flag when running pip install.
However, before this can be done see this link
to ensure the TF_BINARY_URL variable is set correctly in relation to the desired version of TensorFlow that you wish to install.
A:
For pyCharm users:
Check pip version:
pip3 -V
If pip is older than 9.0.1:
py -3 -m pip install --upgrade pip
Then:
py -3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0-py3-none-any.whl
A:
If you're trying to install tensorflow in Anaconda and it isn't working, then you may need to downgrade the Python version, because only 3.6.x is currently supported while Anaconda ships the latest version.
1. Check python version: python --version
2. If version > 3.6.x then follow step 3; otherwise stop, the problem may be somewhere else
3. conda search python
4. conda install python=3.6.6
5. Check version again: python --version
6. If the version is correct, install tensorflow (step 7)
7. pip install tensorflow
A:
Unfortunately my reputation is too low to comment underneath @Sujoy's answer.
In their docs they claim to support Python 3.6.
The link provided by @mayur shows that there is indeed only a python3.5 wheel package. This is my try to install tensorflow:
Microsoft Windows [Version 10.0.16299.371]
(c) 2017 Microsoft Corporation. All rights reserved.
C:\>python3 -m pip install --upgrade pip
Requirement already up-to-date: pip in d:\python\v3\lib\site-packages (10.0.0)
C:\>python3 -m pip -V
pip 10.0.0 from D:\Python\V3\lib\site-packages\pip (python 3.6)
C:\>python3 -m pip install --upgrade tensorflow
Collecting tensorflow
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow
while Python 3.5 seems to install successfully. I would love to see a python3.6 version, since they claim it should also work on python3.6.
Quoted :
"TensorFlow supports Python 3.5.x and 3.6.x on Windows. Note that Python 3 comes with the pip3 package manager, which is the program you'll use to install TensorFlow."
Source : https://www.tensorflow.org/install/install_windows
Python3.5 install :
Microsoft Windows [Version 10.0.16299.371]
(c) 2017 Microsoft Corporation. All rights reserved.
C:\>python3 -m pip install --upgrade pip
Requirement already up-to-date: pip in d:\python\v3\lib\site-packages (10.0.0)
C:\>python3 -m pip -V
pip 10.0.0 from D:\Python\V3_5\lib\site-packages\pip (python 3.5.2)
C:\>python3 -m pip install --upgrade tensorflow
Collecting tensorflow
Downloading
....
....
I hope I am terribly wrong here, but if not, ring an alarm bell
Edit:
A couple of posts below someone pointed out that the following command would work and it did.
python3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0-py3-none-any.whl
Strange that pip is not working
A:
Following these steps allows you to install tensorflow and keras:
Download Anaconda3-5.2.0 which comes with python 3.6 from https://repo.anaconda.com/archive/
Install Anaconda and open Anaconda Prompt and execute below commands
conda install jupyter
conda install scipy
pip install sklearn
pip install msgpack
pip install pandas
pip install pandas-datareader
pip install matplotlib
pip install pillow
pip install requests
pip install h5py
pip install tensorflow
pip install keras
A:
Tensorflow DOES NOT support python versions after 3.8 as of when I'm writing this at least (December 2020). Use this: https://www.tensorflow.org/install to check what python versions it supports, I just spent hours looking through these answers, took me way too long to realise that.
A:
This worked for me with Python 2.7 on Mac OS X Yosemite 10.10.5:
sudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
A:
Start Command Prompt with Administrative Permission
Enter following command python -m pip install --upgrade pip
Next Enter command pip install tensorflow
A:
Update 2019:
To install the preview version of TensorFlow 2 in Google Colab you can use:
!wget https://developer.nvidia.com/compute/cuda/10.0/Prod/local_installers/cuda-repo-ubuntu1604-10-0-local-10.0.130-410.48_1.0-1_amd64 -O cuda-repo-ubuntu1604-10-0-local-10.0.130-410.48_1.0-1_amd64.deb
!dpkg -i cuda-repo-ubuntu1604-10-0-local-10.0.130-410.48_1.0-1_amd64.deb
!apt-key add /var/cuda-repo-10-0-local-10.0.130-410.48/7fa2af80.pub
!apt-get update
!apt-get install cuda
!pip install tf-nightly-gpu-2.0-preview
and to install TensorFlow 2 via pip you can use:
pip install tf-nightly-gpu-2.0-preview for GPU and
pip install tf-nightly-2.0-preview
for CPU.
A:
I installed tensorflow with conda but it didn't seem to work on Windows; finally this command worked fine in cmd.
python.exe -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0-py3-none-any.whl
A:
If you tried the solutions above and they didn't solve the problem, it can be because of a version mismatch.
I installed Python 3.9 and couldn't install tensorflow with pip.
I then uninstalled 3.9, installed 3.8.7, and it succeeded: the maximum Python version that tensorflow supports is 3.8.x (in 2021).
So, check whether your Python version is compatible with the current tensorflow.
A:
I was facing the same issue. I tried the following and it worked.
installing for Mac OS X, anaconda python 2.7
pip uninstall tensorflow
export TF_BINARY_URL=<get the correct url from http://tflearn.org/installation/>
pip install --upgrade $TF_BINARY_URL
Installed tensorflow-1.0.0
A:
The URL to install TensorFlow in Windows, below is the URL. It worked fine for me.
python -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0-py3-none-any.whl
A:
Nothing here worked for me on Windows 10. Here is an updated solution that did work for me.
python -m pip install --upgrade tensorflow.
This is using Python 3.6 and tensorflow 1.5 on Windows 10
A:
Here is my Environment (Windows 10 with NVIDIA GPU). I wanted to install TensorFlow 1.12-gpu and failed multiple times but was able to solve by following the below approach.
This is to help Installing TensorFlow-GPU on Windows 10 Systems
Steps:
Make sure you have NVIDIA graphic card
a. Go to windows explorer, open device manager-->check “Display
Adaptors”-->it will show (ex. NVIDIA GeForce) if you have GPU else it
will show “HD Graphics”
b. If the GPU is AMD’s then tensorflow doesn’t support AMD’s GPU
If you have a GPU, check whether the GPU supports CUDA features or not.
a. If you find your GPU model at this link, then it supports CUDA.
b. If you don’t have CUDA enabled GPU, then you can install only
tensorflow (without gpu)
Tensorflow requires python-64bit version. Uninstall any python dependencies
a. Go to control panel-->search for “Programs and Features”, and
search “python”
b. Uninstall things like anaconda and any pythons related plugins.
These dependencies might interfere with the tensorflow-GPU
installation.
c. Make sure python is uninstalled. Open a command prompt and type
“python”, if it throws an error, then your system has no python and
your can proceed to freshly install python
Install python freshly
a. TF 1.12 supports up to Python 3.6.6. Click here to download the Windows
x86-64 executable installer
b. While installing, select “Add Python 3.6 to PATH” and then click
“Install Now”.
c. After successful installation of python, the installation window
provides an option for disabling path length limit which is one of the
root-cause of Tensorflow build/Installation issues in Windows 10
environment. Click “Disable path length limit” and follow the
instructions to complete the installation.
d. Verify whether python installed correctly. Open a command prompt
and type “python”. It should show the version of Python.
Install Visual Studio
Visual Studio 2017 Community
a. Click the "Visual Studio" link above and download Visual Studio 2017 Community.
b. Under “Visual Studio IDE” on the left, select “community 2017” and
download it
c. During installation, Select “Desktop development with C++” and
install
CUDA 9.0 toolkit
https://developer.nvidia.com/cuda-90-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exelocal
a. Click "Link to CUDA 9.0 toolkit" above, download “Base Installer”
b. Install CUDA 9.0
Install cuDNN
https://developer.nvidia.com/cudnn
a. Click "Link to Install cuDNN" and select “I Agree To the Terms of
the cuDNN Software License Agreement”
b. Register for login, check your email to verify email address
c. Click “cuDNN Download” and fill a short survey to reach “cuDNN
Download” page
d. Select “ I Agree To the Terms of the cuDNN Software License
Agreement”
e. Select “Download cuDNN v7.5.0 (Feb 21, 2019), for CUDA 9.0"
f. In the dropdown, click “cuDNN Library for Windows 10” and download
g. Go to the folder where the file was downloaded, extract the files
h. Add three folders (bin, include, lib) inside the extracted file to
environment
i. Type “environment” in windows 10 search bar and locate the
“Environment Variables” and click “Path” in “User variable” section
and click “Edit” and then select “New” and add those three paths to
three “cuda” folders
j. Close the “Environmental Variables” window.
Install tensorflow-gpu
a. Open a command prompt and type “pip install
--upgrade tensorflow-gpu”
b. It will install tensorflow-gpu
Check whether it was correctly installed or not
a. Type “python” at the command prompt
b. Type "import tensorflow as tf"
c. hello = tf.constant('Hello World!')
d. sess = tf.Session()
e. print(sess.run(hello)) --> Hello World!
Test whether tensorflow is using GPU
a. from tensorflow.python.client import device_lib
b. print(device_lib.list_local_devices())
A:
Python 3.7 works for me. I uninstalled Python 3.8.1, reinstalled 3.7.6, and then executed:
pip3 install --user --upgrade tensorflow
and it works
A:
I had this problem on OSX Sierra 10.12.2. It turns out I had the wrong version of Python installed (I had Python 3.4, but the TensorFlow PyPI packages for OSX are only for Python 3.5 and up).
The solution was to install Python 3.6. Here's what I did to get it working. Note: I used Homebrew to install Python 3.6, you could do the same by using the Python 3.6 installer on python.org
brew uninstall python3
brew install python3
python3 --version # Verify that you see "Python 3.6.0"
pip install tensorflow # With python 3.6 the install succeeds
pip install jupyter # "ipython notebook" didn't work for me until I installed jupyter
ipython notebook # Finally works!
A:
For Windows, this worked for me:
Download the wheel from this link. Then, from the command line, navigate to the download folder where the wheel is present and simply type the following command:
pip install tensorflow-1.0.0-cp36-cp36m-win_amd64.whl
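As an aside, the wheel's filename encodes exactly which interpreters it matches; pip refuses wheels whose tags don't fit your interpreter, which is what produces the "no matching distribution" error. A small sketch decomposing the tags of the filename above (simple case without the optional build tag):

```python
# Decompose a wheel filename into its PEP 427 compatibility tags
# (name-version-pythontag-abitag-platformtag.whl; no optional build tag here).
wheel = "tensorflow-1.0.0-cp36-cp36m-win_amd64.whl"
dist, version, python_tag, abi_tag, platform_tag = wheel[: -len(".whl")].split("-")
print(python_tag)    # CPython 3.6 only
print(abi_tag)       # CPython 3.6m ABI
print(platform_tag)  # 64-bit Windows only
```

So this particular wheel installs only on a 64-bit CPython 3.6 on Windows; any other interpreter gets the error from the question.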
A:
Excerpt from tensorflow website
https://www.tensorflow.org/install/install_windows
Installing with native pip
If the following version of Python is not installed on your machine, install it now:
Python 3.5.x from python.org
TensorFlow only supports version 3.5.x of Python on Windows. Note that Python 3.5.x comes with the pip3 package manager, which is the program you'll use to install TensorFlow.
To install TensorFlow, start a terminal. Then issue the appropriate pip3 install command in that terminal. To install the CPU-only version of TensorFlow, enter the following command:
C:\> pip3 install --upgrade tensorflow
To install the GPU version of TensorFlow, enter the following command:
C:\> pip3 install --upgrade tensorflow-gpu
A:
If your command pip install --upgrade tensorflow completes, then your version of TensorFlow should be the newest. I personally prefer to use Anaconda. You can easily install and upgrade TensorFlow as follows:
conda install -c conda-forge tensorflow # to install
conda upgrade -c conda-forge tensorflow # to upgrade
Also if you want to use it with your GPU you have an easy install:
conda install -c anaconda tensorflow-gpu
I've been using it for a while now and I have never had any problem.
A:
PyPI currently does not provide a 32-bit build of TensorFlow; installation worked once I uninstalled 32-bit Python and installed the x64 version.
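You can confirm which build you are running from Python itself; a minimal sketch:

```python
import struct
import sys

# Pointer size distinguishes a 32-bit from a 64-bit interpreter build.
bits = struct.calcsize("P") * 8
print(f"Python {sys.version_info.major}.{sys.version_info.minor} is a {bits}-bit build")
```

If this prints 32, the TensorFlow wheels on PyPI will not match your interpreter.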
A:
Note: This answer is for Cygwin users
Leaving this answer because none of the others worked for my use case: using the *nix-on-Windows terminal environment Cygwin (http://www.cygwin.com/) to install TensorFlow into a virtualenv (at least a simple Ctrl+F on the answer pages found nothing).
TLDR: If you are using a virtualenv in a Cygwin terminal, know that Cygwin seems to have a problem installing TensorFlow and throws the error specified in this post's question (a similar sentiment can be found at https://stackoverflow.com/a/45230106/8236733 — similar cause, different error). Solve it by creating the virtualenv in the Windows Command Prompt; you can then access/activate that virtualenv from a Cygwin terminal via source ./Scripts/activate to use Windows' (not Cygwin's) Python.
When just using cygwin's python3 to try use tensorflow, eg. something like...
apt-cyg install python3-devel
cd python-virtualenv-base
virtualenv -p `which python3` tensorflow-examples
...I found that there were problems installing the tensorflow-gpu package using Cygwin's Python. I was seeing the error
$ pip install tensorflow --user
Collecting tensorflow
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow
There are many proposed solutions, but none of them helped in my case. They are all generally along the lines of "you probably have Python 3 for a 32-bit architecture installed; TensorFlow requires 64-bit" or some other Python mismatch, whereas here it simply seems that Cygwin's Python has problems installing tensorflow-gpu.
What did end up working for me was doing...
Install python3 via the official Windows way for the Windows system (the cygwin system is separate, so uses a different python)
Open the Command Prompt in Windows (not a cygwin terminal) and do...
C:\Users\me\python-virtualenvs-base>python
Python 3.6.2 (v3.6.2:5fd33b5, Jul 8 2017, 04:57:36) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()
C:\Users\me\python-virtualenvs-base>pip -V
pip 9.0.1 from c:\users\me\appdata\local\programs\python\python36\lib\site-packages (python 3.6)
C:\Users\me\python-virtualenvs-base>pip install virtualenv
Collecting virtualenv
Downloading https://files.pythonhosted.org/packages/b6/30/96a02b2287098b23b875bc8c2f58071c35d2efe84f747b64d523721dc2b5/virtualenv-16.0.0-py2.py3-none-any.whl (1.9MB)
100% |████████████████████████████████| 1.9MB 435kB/s
Installing collected packages: virtualenv
Successfully installed virtualenv-16.0.0
You are using pip version 9.0.1, however version 18.0 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
C:\Users\me\python-virtualenvs-base>virtualenv tensorflow-examples
Using base prefix 'c:\\users\\me\\appdata\\local\\programs\\python\\python36'
New python executable in C:\Users\me\python-virtualenvs-base\tensorflow-examples\Scripts\python.exe
Installing setuptools, pip, wheel...done.
Then, can go back to the cygwin terminal, navigate back to that virtualenv that you created in the command prompt and do...
➜ tensorflow-examples source ./Scripts/activate
(tensorflow-examples) ➜ tensorflow-examples python -V
Python 3.6.2
(tensorflow-examples) ➜ tensorflow-examples pip install tensorflow-gpu
Collecting tensorflow-gpu
Downloading
....
Notice you don't do source ./bin/activate in the virtualenv as you would if you had created the virtualenv in cygwin's pseudo-linux environment, but instead do source ./Scripts/activate.
A:
My env: Win 10, python 3.6
pip3 install --upgrade tensorflow
pip install --upgrade tensorflow
With error:
> Collecting tensorflow Could not find a version that satisfies the
> requirement tensorflow (from versions: ) No matching distribution
> found for tensorflow
I also tried pip install tensorflow and pip install tensorflow-gpu.
But error:
> Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow
> Could not find a version that satisfies the requirement tensorflow-gpu (from versions: ) No matching distribution found for tensorflow-gpu
Install OK when tried with Step: (https://www.tensorflow.org/install/install_windows)
Follow the instructions on the Anaconda download site to download
and install Anaconda. https://www.continuum.io/downloads
Create a conda environment named tensorflow by invoking the
following command:
C:> conda create -n tensorflow pip python=3.5
Activate the conda environment by issuing the following command:
C:> activate tensorflow
(tensorflow)C:> # Your prompt should change
Issue the appropriate command to install TensorFlow inside your
conda environment. To install the CPU-only version of TensorFlow,
enter the following command:
(tensorflow)C:> pip install --ignore-installed --upgrade tensorflow
To install the GPU version of TensorFlow, enter the following
command (on a single line):
(tensorflow)C:> pip install --ignore-installed --upgrade tensorflow-gpu
A:
If you are trying to install TensorFlow with Anaconda on Windows, my advice is to uninstall Anaconda and download a 64-bit Python version, ending with amd64, from the releases page. For me, it was python-3.7.8-amd64.exe.
Then install Tensorflow in a virtual environment by following the instructions on official website of Tensorflow.
A:
I had the same issue; the problem was that the AWS machine I was using had an ARM processor, so I had to build TensorFlow manually.
A:
I was able to install tensorflow-macos and tensorflow-metal on my Mac:
$ python -m pip install -U pip
$ pip install tensorflow-macos
$ pip install tensorflow-metal
A:
The correct way to install it would be as mentioned here
$ pip install --upgrade TF_BINARY_URL # Python 2.7
$ pip3 install --upgrade TF_BINARY_URL # Python 3.N
Find the correct TF_BINARY_URL for your hardware from the tensor flow official homepage
A:
The only thing that worked for me was to use Anaconda and create a new conda env with conda create -n tensorflow python=3.5, then activate it using activate tensorflow, and finally conda install -c conda-forge tensorflow.
This works around every issue I had including ssl certs, proxy settings, and does not need admin access. It should be noted that this is not directly supported by the tensorflow team.
Source
A:
Here is what I did for Windows 10! I also did not disturb my previous installation of Python 2.7
Step1: Install Windows x86-64 executable installer from the link:
https://www.python.org/downloads/release/python-352/
Step2: Open cmd as Administrator
Step3: Type this command:
pip install https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl
You should see that it works and as shown in the picture below, I also tried the sample example.
A:
I've found the problem.
I'm using a Windows computer that previously had Python 2 installed.
After installing Python 3 (without setting the PATH), pip3 reported its version correctly, but the python executable still pointed to Python 2.
Then I set the PATH to the Python 3 executable (removing all Python 2 paths), started a new command prompt, and reinstalled TensorFlow. It works!
I think this problem can happen on macOS too, since macOS ships with a default system Python.
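A quick way to see which interpreter a command actually launches, before touching PATH; a minimal sketch:

```python
import shutil
import sys

# The interpreter actually executing this script:
print(sys.executable)

# What "python" on your PATH currently resolves to (None if nothing is found):
print(shutil.which("python"))
```

If the two paths disagree, pip and python are likely coming from different installations, which is exactly the mismatch described above.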
A:
Check https://pypi.python.org/pypi/tensorflow to see which packages are available.
As of this writing, they don't provide a source package, so if there's no prebuilt one for your platform, this error occurs. If you add -v to the pip command line, you'll see it iterating over the packages that are available at PyPI and discarding them for being incompatible.
You need to either find a prebuilt package somewhere else, or compile tensorflow yourself from its sources by instructions at https://www.tensorflow.org/install/install_sources .
They have a good reason for not building for some platforms though:
A win32 package is missing because TensorFlow's dependency, Bazel, only supports win64.
For win64, only 3.5+ is supported because earlier versions are compiled with compilers without C++11 support.
A:
It seems there can be multiple reasons for TensorFlow not installing via pip. The one I faced on Windows 10 was that I didn't have a supported version of cuDNN on my system path. As of now [Dec 2017], TensorFlow on Windows only supports cuDNN v6.1. So add the path to cuDNN 6.1; if everything else is correct, TensorFlow should install.
A:
I have experienced the same error while I tried to install tensorflow in an anaconda package.
After struggling a lot, I finally found an easy way to install any package without running into an error.
First create an environment in your anaconda administrator using this command
conda create -n packages
Now activate that environment
activate packages
and try running
pip install tensorflow
After a successful installation, we need to make this environment accessible to jupyter notebook.
For that, you need to install a package called ipykernel using this command
pip install ipykernel
After installing ipykernel enter the following command
python -m ipykernel install --user --name=packages
After running this command, this environment will be added to jupyter notebook
and that's it.
Just go to your jupyter notebook, click on new notebook, and you can see your environment. Select that environment and try importing tensorflow and in case if you want to install any other packages, just activate the environment and install those packages and use that environment in your jupyter
A:
I was having this problem too. Looking at the different .whl files, I noticed there was no 32-bit version of TensorFlow for Python 3.7. In the end I just had to install 64-bit Python 3.7 from here.
A:
2.0 COMPATIBLE SOLUTION:
Execute the below commands in Terminal (Linux/MacOS) or in Command Prompt (Windows) to install Tensorflow 2.0 using Pip:
#Install tensorflow using pip virtual env
pip install virtualenv
virtualenv tf_2.0.0 # tf_2.0.0 is virtual env name
source tf_2.0.0/bin/activate
#You should see tf_2.0.0 Env now. Execute the below steps
pip install tensorflow==2.0.0
python
>>import tensorflow as tf
>>tf.__version__
2.0.0
Execute the below commands in Terminal (Linux/MacOS) or in Command Prompt (Windows) to install Tensorflow 2.0 using Bazel:
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
#The repo defaults to the master development branch. You can also checkout a release branch to build:
git checkout r2.0
#Configure the Build => Use the Below line for Windows Machine
python ./configure.py
#Configure the Build => Use the Below line for Linux/MacOS Machine
./configure
#This script prompts you for the location of TensorFlow dependencies and asks for additional build configuration options.
#Build Tensorflow package
#CPU support
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
#GPU support
bazel build --config=opt --config=cuda --define=no_tensorflow_py_deps=true //tensorflow/tools/pip_package:build_pip_package
A:
For Window you can use below command
python3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-2.3.0-cp38-cp38-win_amd64.whl
A:
I had a similar problem.
It turned out the default package is the GPU version, and I had installed it on a server with no GPU.
pip install --upgrade tensorflow-cpu
Did the trick
A:
It is easier to build using Git. The methods are provided on the TensorFlow website, but the links may change, so see the reference:
https://www.tensorflow.org/install/source_windows
git clone https://github.com/tensorflow/tensorflow.git
My Python is 3.9.7
I also use Windows 10 with the requirements as below:
1. Microsoft Visual C++ Redistributables installed from Microsoft Visual Studio, matching x64 as required in the list.
1.1 Microsoft Visual C++ 2012 Redistributable (x64) and updates
1.2 Microsoft Visual C++ 2013 Redistributable (x64) - 12.0.40664
1.3 Microsoft Visual C++ 2015-2019 Redistributable (x64) - 14.29.30133
1.4 vs_community__1795732196.1624941787.exe updates
2. Python and AI learning
tensorboard 2.6.0
tensorboard-data-server 0.6.1
tensorboard-plugin-profile 2.5.0
tensorboard-plugin-wit 1.8.0
***tensorflow 2.6.0
tensorflow-datasets 4.4.0
tensorflow-estimator 2.6.0
***tensorflow-gpu 2.6.0
tensorflow-hub 0.12.0
tensorflow-metadata 1.2.0
tensorflow-text 2.6.0
***PyOpenGL 3.1.5
pyparsing 2.4.7
python-dateutil 2.8.2
python-slugify 5.0.2
python-speech-features 0.6
PyWavelets 1.1.1
PyYAML 5.4.1
scikit-image 0.18.3
scikit-learn 1.0.1
***gym 0.21.0
A:
To see specifically what the issue is, run:
pip install -vvv tensorflow
This will show you the wheel files that are available and why they are not matched.
If you then run pip debug --verbose, it will show you all the tags that are compatible.
In my case I was trying to install tensorflow on an m1 mac in a multipass ubuntu instance, and needed https://pypi.org/project/tensorflow-aarch64/ instead
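Rough stdlib analogues of the data pip combines into those tags can be printed directly; a sketch (these values approximate, but don't exactly equal, pip's full tag set, which `pip debug --verbose` reports):

```python
import platform
import sys
import sysconfig

# Rough stdlib analogues of the pieces pip combines into compatibility tags.
py_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print("implementation:", platform.python_implementation())  # e.g. CPython
print("python tag:    ", py_tag)                            # e.g. cp39
print("platform:      ", sysconfig.get_platform())          # e.g. linux-x86_64
```

On an M1 Mac under an aarch64 Linux VM, the platform line is what reveals that the regular `tensorflow` wheels (built for x86_64) can never match.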
A:
I understand that the issue is pretty old, but recently I faced it on a MacBook Air M1. The solution was simply to use pip install tensorflow-macos.
|
TensorFlow not found using pip
|
I'm trying to install TensorFlow using pip:
$ pip install tensorflow --user
Collecting tensorflow
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow
What am I doing wrong? So far I've used Python and pip with no issues.
|
[
"I found this to finally work.\npython3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.12.0-py3-none-any.whl\n\nEdit 1: This was tested on Windows (8, 8.1, 10), Mac and Linux. Change python3 to python according to your configuration. Change py3 to py2 in the url if you are using Python 2.x.\nEdit 2: A list of different versions if someone needs: https://storage.googleapis.com/tensorflow\nEdit 3: A list of urls for the available wheel packages is available here:\nhttps://www.tensorflow.org/install/pip#package-location\n",
"You need a 64-bit version of Python and in your case are using a 32-bit version. As of now Tensorflow only supports 64-bit versions of Python 3.5.x and 3.8.x on Windows. See the install docs to see what is currently supported\nTo check which version of Python you are running, type python or python3 to start the interpreter, and then type import struct;print(struct.calcsize(\"P\") * 8) and that will print either 32 or 64 to tell you which bit version of Python you are running.\nFrom comments:\nTo download a different version of Python for Windows, go to python.org/downloads/windows and scroll down until you see the version you want that ends in a \"64\". That will be the 64 bit version that should work with tensorflow\n",
"You need to use the right version of Python and pip.\nOn Windows 10, with Python 3.6.X version I was facing the same problem, then after checking deliberately I noticed I had the Python-32 bit installation on my 64 bit machine. Remember TensorFlow is only compatible with 64bit installation of Python, not the 32 bit version of Python\n\nIf we download Python from python.org, the default installation would be 32 bit. So we have to download the 64 bit installer manually to install Python 64 bit. And then add below to PATH environment.\nC:\\Users\\AppData\\Local\\Programs\\Python\\Python36\nC:\\Users\\AppData\\Local\\Programs\\Python\\Python36\\Scripts\n\nThen run gpupdate /Force on command prompt. If the Python command doesn't work for 64 bit then restart your machine.\nThen run python on command prompt. It should show 64 bit.\nC:\\Users\\YOURNAME>python\nPython 3.6.3 (v3.6.3:2c5fed8, Oct 3 2017, 18:11:49) [MSC v.1900 64 bit (AMD64)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n\nThen run below command to install tensorflow CPU version (recommended)\npip3 install --upgrade tensorflow\n\n\nOctober 2020 update:\nTensorflow now supports Python 3.5.x through Python 3.8.x, but you still have to use a 64-bit version.\nIf you need to run multiple versions of Python on the same machine, you can use a virtual environment to help manage them.\n",
"From tensorflow website: \"You will need pip version 8.1 or later for the following commands to work\". Run this command to upgrade your pip, then try install tensorflow again:\npip install --upgrade pip\n\n",
"If you are trying to install it on a windows machine you need to have a 64-bit version of python 3.5. This is the only way to actually install it. From the website:\n\nTensorFlow supports only 64-bit Python 3.5 on Windows. We have tested the pip packages with the following distributions of Python:\nPython 3.5 from Anaconda\nPython 3.5 from python.org.\n\nYou can download the proper version of python from here (make sure you grab one of the ones that says \"Windows x86-64\")\nYou should now be able to install with pip install tensorflow or python -m pip install tensorflow (make sure that you are using the right pip, from python3, if you have both python2 and python3 installed)\nRemember to install Anaconda 3-5.2.0 as the latest version which is 3-5.3.0 have python version 3.7 which is not supported by Tensorflow.\n",
"I figured out that TensorFlow 1.12.0 only works with Python version 3.5.2. I had Python 3.7 but that didn't work. So, I had to downgrade Python and then I could install TensorFlow to make it work.\nTo downgrade your python version from 3.7 to 3.6\nconda install python=3.6.8\n\n",
"Updated 11/28/2016: TensorFlow is now available in PyPI, starting with release 0.12. You can type\npip install tensorflow\n\n...or...\npip install tensorflow-gpu\n\n...to install the CPU-only or GPU-accelerated version of TensorFlow respectively.\n\nPrevious answer: TensorFlow is not yet in the PyPI repository, so you have to specify the URL to the appropriate \"wheel file\" for your operating system and Python version.\nThe full list of supported configurations is listed on the TensorFlow website, but for example, to install version 0.10 for Python 2.7 on Linux, using CPU only, you would type the following command:\n$ pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0rc0-cp27-none-linux_x86_64.whl\n\n",
"Install Python 3.5.x 64 bit amd version here. Make sure you add Python to your PATH variable. Then open a command prompt and type \npython -m pip install --upgrade pip\n\nshould give you the following result :\n Collecting pip\n Using cached pip-9.0.1-py2.py3-none-any.whl\n Installing collected packages: pip\n Found existing installation: pip 7.1.2\n Uninstalling pip-7.1.2:\n Successfully uninstalled pip-7.1.2\n Successfully installed pip-9.0.1\n\nNow type \n pip3 install --upgrade tensorflow\n\n",
"I had the same problem and solved with this:\n# Ubuntu/Linux 64-bit, CPU only, Python 2.7\n(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.12.1-cp27-none-linux_x86_64.whl\n\n# Ubuntu/Linux 64-bit, GPU enabled, Python 2.7\n# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see \"Installing from sources\" below.\n\n# Mac OS X, CPU only, Python 2.7:\n(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.1-py2-none-any.whl\n\n# Mac OS X, GPU enabled, Python 2.7:\n(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-0.12.1-py2-none-any.whl\n\n# Ubuntu/Linux 64-bit, CPU only, Python 3.4\n(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.12.1-cp34-cp34m-linux_x86_64.whl\n\n# Ubuntu/Linux 64-bit, GPU enabled, Python 3.4\n# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see \"Installing from sources\" below.\n(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-0.12.1-cp34-cp34m-linux_x86_64.whl\n\n# Ubuntu/Linux 64-bit, CPU only, Python 3.5\n(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.12.1-cp35-cp35m-linux_x86_64.whl\n\n# Requires CUDA toolkit 8.0 and CuDNN v5. 
For other versions, see \"Installing from sources\" below.\n(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-0.12.1-cp35-cp35m-linux_x86_64.whl\n\n# Mac OS X, CPU only, Python 3.4 or 3.5:\n(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.1-py3-none-any.whl\n\n# Mac OS X, GPU enabled, Python 3.4 or 3.5:\n(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-0.12.1-py3-none-any.whl\n\nPlus:\n# Python 2\n(tensorflow)$ pip install --upgrade $TF_BINARY_URL\n\n# Python 3\n(tensorflow)$ pip3 install --upgrade $TF_BINARY_URL\n\nFound on Docs.\nUPDATE!\nThere are new links for new versions\nFor example, for installing tensorflow v1.0.0 in OSX you need to use:\nhttps://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0-py2-none-any.whl\n\ninstead of\nhttps://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.1-py2-none-any.whl\n\n",
"I had the same error when trying to install on my Mac (using Python 2.7). A similar solution to the one I'm giving here also seemed to work for Python 3 on Windows 8.1 according to a different answer on this page by Yash Kumar Verma\nSolution \nStep 1: go to The URL of the TensorFlow Python package section of the TensorFlow installation page and copy the URL of the relevant link for your Python installation.\nStep 2: open a terminal/command prompt and run the following command:\npip install --upgrade [paste copied url link here] \nSo for me it was the following:\npip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.2.0-py2-none-any.whl\nUpdate (July 21 2017): I tried this with some others who were running on Windows machines with Python 3.6 and they had to change the line in Step 2 to:\npython -m pip install [paste copied url link here]\nUpdate (26 July 2018): For Python 3.6.2 (not 3.7 because it's in 3.6.2 in TF Documentation), you can also use pip3 install --upgrade [paste copied URL here] in Step 2.\n",
"Try this:\nexport TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.1-py3-none-any.whl\npip3 install --upgrade $TF_BINARY_URL\n\nSource: https://www.tensorflow.org/get_started/os_setup (page no longer exists)\nUpdate 2/23/17\nDocumentation moved to: https://www.tensorflow.org/install\n",
"\nInstall python by checking Add Python to Path\npip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0-py3-none-any.whl\n\nThis works for windows 10.0\n",
"Try this, it should work:\n python.exe -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0-py3-none-any.whl\n\n",
"I had the same problem. After uninstalling the 32-bit version of python and reinstalling the 64-bit version I tried reinstalling TensorFlow and it worked.\nLink to TensorFlow guide: https://www.tensorflow.org/install/install_windows\n",
"If you run into this issue recently (say, after Python 3.7 release in 2018), most likely this is caused by the lack of Python 3.7 support (yet) from the tensorflow side. Try using Python 3.6 instead if you don't mind. There are some tricks you can find from https://github.com/tensorflow/tensorflow/issues/20444, but use them at your own risk. I used the one harpone suggested - first downloaded the tensorflow wheel for Python 3.6 and then renamed it manually...\ncp tensorflow-1.11.0-cp36-cp36m-linux_x86_64.whl tensorflow-1.11.0-cp37-cp37m-linux_x86_64.whl\npip install tensorflow-1.11.0-cp37-cp37m-linux_x86_64.whl\n\nThe good news is that there is a pull request for 3.7 support already. Hope it will be released soon.\n",
"There are multiple groups of answers to this question. This answer aims to generalize one group of answers:\nThere may not be a version of TensorFlow that is compatible with your version of Python. This is particularly true if you're using a new release of Python. For example, there may be a delay between the release of a new version of Python and the release of TensorFlow for that version of Python.\nIn this case, I believe your options are to:\n\nUpgrade or downgrade to a different version of Python. (Virtual environments are good for this, e.g. conda install python=3.6)\nSelect a specific version of tensorflow that is compatible with your version of python, e.g. if you're still using python3.4: pip install tensorflow==2.0\nCompile TensorFlow from the source code.\nWait for a new release of TensorFlow which is compatible with your version of Python.\n\n",
"as of today, if anyone else is wondering,\npython >= 3.9 will cause the same issue\nuninstall python 3.9, and install 3.8 , it should resolve it\n",
"If you are using the Anaconda Python installation, pip install tensorflow will give the error stated above, shown below:\nCollecting tensorflow\nCould not find a version that satisfies the requirement tensorflow (from versions: )\nNo matching distribution found for tensorflow\n\nAccording to the TensorFlow installation page, you will need to use the --ignore-installed flag when running pip install. \nHowever, before this can be done see this link\nto ensure the TF_BINARY_URL variable is set correctly in relation to the desired version of TensorFlow that you wish to install.\n",
"For pyCharm users:\n\nCheck pip version:\npip3 -V\nIf pip is older than 9.0.1:\npy -3 -m pip install --upgrade pip\nThen:\npy -3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0-py3-none-any.whl\n\n",
"If you're trying to install tensorflow in anaconda and it isn't working, then you may need to downgrade python version because only 3.6.x is currently supported while anaconda has the latest version.\n\ncheck python version: python --version\nif version > 3.6.x then follow step 3, otherwise stop, the problem may be somewhere else\nconda search python\nconda install python=3.6.6\nCheck version again: python --version\nIf version is correct, install tensorflow (step 7)\npip install tensorflow\n\n",
"Unfortunately my reputation is to low to command underneath @Sujoy answer.\nIn their docs they claim to support python 3.6.\nThe link provided by @mayur shows that their is indeed only a python3.5 wheel package. This is my try to install tensorflow:\nMicrosoft Windows [Version 10.0.16299.371]\n(c) 2017 Microsoft Corporation. All rights reserved.\n\nC:\\>python3 -m pip install --upgrade pip\nRequirement already up-to-date: pip in d:\\python\\v3\\lib\\site-packages (10.0.0)\n\nC:\\>python3 -m pip -V\npip 10.0.0 from D:\\Python\\V3\\lib\\site-packages\\pip (python 3.6)\n\nC:\\>python3 -m pip install --upgrade tensorflow\nCollecting tensorflow\nCould not find a version that satisfies the requirement tensorflow (from versions: )\nNo matching distribution found for tensorflow\n\nwhile python 3.5 seems to install successfully. I would love to see a python3.6 version since they claim it should also work on python3.6.\nQuoted :\n\"TensorFlow supports Python 3.5.x and 3.6.x on Windows. Note that Python 3 comes with the pip3 package manager, which is the program you'll use to install TensorFlow.\"\nSource : https://www.tensorflow.org/install/install_windows\nPython3.5 install :\nMicrosoft Windows [Version 10.0.16299.371]\n(c) 2017 Microsoft Corporation. All rights reserved.\n\nC:\\>python3 -m pip install --upgrade pip\nRequirement already up-to-date: pip in d:\\python\\v3\\lib\\site-packages (10.0.0)\n\nC:\\>python3 -m pip -V\npip 10.0.0 from D:\\Python\\V3_5\\lib\\site-packages\\pip (python 3.5.2)\n\nC:\\>python3 -m pip install --upgrade tensorflow\nCollecting tensorflow\n Downloading \n ....\n ....\n\nI hope i am terrible wrong here but if not ring a alarm bell \nEdit:\nA couple of posts below someone pointed out that the following command would work and it did.\npython3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0-py3-none-any.whl\n\nStrange pip is not working \n",
"Following these steps allows you to install tensorflow and keras:\n\nDownload Anaconda3-5.2.0 which comes with python 3.6 from https://repo.anaconda.com/archive/\n\nInstall Anaconda and open Anaconda Prompt and execute below commands\nconda install jupyter \nconda install scipy\npip install sklearn\npip install msgpack\npip install pandas\npip install pandas-datareader\npip install matplotlib \npip install pillow\npip install requests\npip install h5py\npip install tensorflow\npip install keras\n\n\n\n",
"Tensorflow DOES NOT support python versions after 3.8 as of when I'm writing this at least (December 2020). Use this: https://www.tensorflow.org/install to check what python versions it supports, I just spent hours looking through these answers, took me way too long to realise that.\n",
"This worked for me with Python 2.7 on Mac OS X Yosemite 10.10.5:\nsudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl\n\n",
"\nStart Command Prompt with Administrative Permission\nEnter following command python -m pip install --upgrade pip\nNext Enter command pip install tensorflow\n\n",
"update 2019:\nfor install the preview version of TensorFlow 2 in Google Colab you can use:\n!wget https://developer.nvidia.com/compute/cuda/10.0/Prod/local_installers/cuda-repo-ubuntu1604-10-0-local-10.0.130-410.48_1.0-1_amd64 -O cuda-repo-ubuntu1604-10-0-local-10.0.130-410.48_1.0-1_amd64.deb\n!dpkg -i cuda-repo-ubuntu1604-10-0-local-10.0.130-410.48_1.0-1_amd64.deb\n!apt-key add /var/cuda-repo-10-0-local-10.0.130-410.48/7fa2af80.pub\n!apt-get update\n!apt-get install cuda\n!pip install tf-nightly-gpu-2.0-preview\n\nand for install the TensorFlow 2 bye pip you can use:\npip install tf-nightly-gpu-2.0-preview for GPU and\npip install tf-nightly-2.0-preview\nfor CPU.\n",
"I installed tensorflow with conda but it didn't seem to work on Windows; finally this command here works fine in cmd.\n\n python.exe -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0-py3-none-any.whl\n\n",
"If you tried the solutions above and they didn't solve the problem, it can be because of a version inconsistency.\nI installed Python 3.9 and I couldn't install tensorflow with pip.\nI then uninstalled 3.9, installed 3.8.7, and it succeeded... the max Python version that tensorflow supports is 3.8.x (in 2021),\nso check whether your Python version is compatible with the current tensorflow.\n",
"I was facing the same issue. I tried the following and it worked.\ninstalling for Mac OS X, anaconda python 2.7\npip uninstall tensorflow\nexport TF_BINARY_URL=<get the correct url from http://tflearn.org/installation/>\npip install --upgrade $TF_BINARY_URL\n\nInstalled tensorflow-1.0.0\n",
"The URL to install TensorFlow in Windows, below is the URL. It worked fine for me.\npython -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0-py3-none-any.whl\n\n",
"Nothing here worked for me on Windows 10. Perhaps an updated solution below that did work for me.\npython -m pip install --upgrade tensorflow.\nThis is using Python 3.6 and tensorflow 1.5 on Windows 10\n",
"Here is my Environment (Windows 10 with NVIDIA GPU). I wanted to install TensorFlow 1.12-gpu and failed multiple times but was able to solve by following the below approach.\nThis is to help Installing TensorFlow-GPU on Windows 10 Systems\nSteps:\n\nMake sure you have NVIDIA graphic card\n\n\na. Go to windows explorer, open device manager-->check “Display\n Adaptors”-->it will show (ex. NVIDIA GeForce) if you have GPU else it\n will show “HD Graphics” \nb. If the GPU is AMD’s then tensorflow doesn’t support AMD’s GPU\n\n\nIf you have a GPU, check whether the GPU supports CUDA features or not.\n\n\na. If you find your GPU model at this link, then it supports CUDA. \nb. If you don’t have CUDA enabled GPU, then you can install only\n tensorflow (without gpu)\n\n\nTensorflow requires python-64bit version. Uninstall any python dependencies\n\n\na. Go to control panel-->search for “Programs and Features”, and\n search “python” \nb. Uninstall things like anaconda and any pythons related plugins.\n These dependencies might interfere with the tensorflow-GPU\n installation.\nc. Make sure python is uninstalled. Open a command prompt and type\n “python”, if it throws an error, then your system has no python and\n your can proceed to freshly install python\n\n\nInstall python freshly\n\n\na.TF1.12 supports upto Python 3.6.6. Click here to download Windows\n x86-64 executable installer \nb. While installing, select “Add Python 3.6 to PATH” and then click\n “Install Now”.\n\n\n\nc. After successful installation of python, the installation window\n provides an option for disabling path length limit which is one of the\n root-cause of Tensorflow build/Installation issues in Windows 10\n environment. Click “Disable path length limit” and follow the\n instructions to complete the installation.\n\n\n\nd. Verify whether python installed correctly. Open a command prompt\n and type “python”. 
It should show the version of Python.\n\n\n\nInstall Visual Studio\n\nVisual Studio 2017 Community \n\na. Click the \"Visual Studio Link\" above.Download Visual Studio 2017 Community.\nb. Under “Visual Studio IDE” on the left, select “community 2017” and\n download it\nc. During installation, Select “Desktop development with C++” and\n install\n\n\nCUDA 9.0 toolkit\n\nhttps://developer.nvidia.com/cuda-90-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exelocal\n\na. Click \"Link to CUDA 9.0 toolkit\" above, download “Base Installer”\nb. Install CUDA 9.0\n\n\nInstall cuDNN\n\nhttps://developer.nvidia.com/cudnn\n\na. Click \"Link to Install cuDNN\" and select “I Agree To the Terms of\n the cuDNN Software License Agreement”\nb. Register for login, check your email to verify email address\nc. Click “cuDNN Download” and fill a short survey to reach “cuDNN\n Download” page\nd. Select “ I Agree To the Terms of the cuDNN Software License\n Agreement”\ne. Select “Download cuDNN v7.5.0 (Feb 21, 2019), for CUDA 9.0\"\nf. In the dropdown, click “cuDNN Library for Windows 10” and download\ng. Go to the folder where the file was downloaded, extract the files\n\n\n\nh. Add three folders (bin, include, lib) inside the extracted file to\n environment\n\n\n\ni. Type “environment” in windows 10 search bar and locate the\n “Environment Variables” and click “Path” in “User variable” section\n and click “Edit” and then select “New” and add those three paths to\n three “cuda” folders\nj. Close the “Environmental Variables” window.\n\n\nInstall tensorflow-gpu\n\n\na. Open a command prompt and type “pip install\n --upgrade tensorflow-gpu”\nb. It will install tensorflow-gpu\n\n\nCheck whether it was correctly installed or not\n\n\na. Type “python” at the command prompt\nb. Type “import tensorflow as tf\nc. hello=tf.constant(‘Hello World!’)\nd. sess=tf.Session()\ne. print(sess.run(hello)) -->Hello World!\n\n\nTest whether tensorflow is using GPU\n\n\na. 
from tensorflow.python.client import device_lib\n print(device_lib.list_local_devices())\nb. print(device_lib.list_local_devices())\n\n",
"Python 3.7 works for me, I uninstalled python 3.8.1 and reinstalled 3.7.6. After that, I executed: \npip3 install --user --upgrade tensorflow\n\nand it works \n",
"I had this problem on OSX Sierra 10.12.2. It turns out I had the wrong version of Python installed (I had Python 3.4 but tensorflow pypi packages for OSX are only for python 3.5 and up). \nThe solution was to install Python 3.6. Here's what I did to get it working. Note: I used Homebrew to install Python 3.6, you could do the same by using the Python 3.6 installer on python.org\nbrew uninstall python3\nbrew install python3\npython3 --version # Verify that you see \"Python 3.6.0\"\npip install tensorflow # With python 3.6 the install succeeds\npip install jupyter # \"ipython notebook\" didn't work for me until I installed jupyter\nipython notebook # Finally works!\n\n",
"For windows this worked for me,\nDownload the wheel from this link. Then from command line navigate to your download folder where the wheel is present and simply type in the following command - \npip install tensorflow-1.0.0-cp36-cp36m-win_amd64.whl\n",
"Excerpt from tensorflow website\nhttps://www.tensorflow.org/install/install_windows\n\nInstalling with native pip\nIf the following version of Python is not installed on your machine, install it now:\nPython 3.5.x from python.org\nTensorFlow only supports version 3.5.x of Python on Windows. Note that Python 3.5.x comes with the pip3 package manager, which is the program you'll use to install TensorFlow.\nTo install TensorFlow, start a terminal. Then issue the appropriate pip3 install command in that terminal. To install the CPU-only version of TensorFlow, enter the following command:\n\nC:\\> pip3 install --upgrade tensorflow\nTo install the GPU version of TensorFlow, enter the following command:\n\nC:\\> pip3 install --upgrade tensorflow-gpu\n\n",
"If your command pip install --upgrade tensorflowcompiles, then your version of tensorflow should be the newest. I personally prefer to use anaconda. You can easily install and upgrade tensorflow as follows:\n conda install -c conda-forge tensorflow # to install\n conda upgrade -c conda-forge tensorflow # to upgrade\n\nAlso if you want to use it with your GPU you have an easy install:\n conda install -c anaconda tensorflow-gpu\n\nI've been using it for a while now and I have never had any problem.\n",
"Currently PIP does not have a 32bit version of tensorflow, it worked when I uninstalled python 32bit and installed x64\n",
"Note: This answer is for Cygwin users\nLeaving this answer because none of the others here worked for my use case (using the *nix-on-Windows terminal environment to install tensorflow on a virtualenv, cygwin (http://www.cygwin.com/)) (at least a simple control+F on the answer pages found nothing).\nTLDR: If you are using a virtualenv in a cygwin terminal, know that cygwin seems to have a problem installing tensorflow and throws the error specified in this post's question (a similar sentiment can be found here (https://stackoverflow.com/a/45230106/8236733) (similar cause, different error)). Solved by creating the virtualenv in the Windows Command Prompt. Then can access / activate the virtualenv from a cygwin terminal via source ./Scripts/activate to use Windows' (not cygwin's) python.\n\nWhen just using cygwin's python3 to try use tensorflow, eg. something like...\napt-cyg install python3-devel\ncd python-virtualenv-base\nvirtualenv -p `which python3` tensorflow-examples\n\nfound that there were some problems with installing tensorflow-gpu package using cygwin's python. 
Was seeing the error\n\n$ pip install tensorflow --user\nCollecting tensorflow\nCould not find a version that satisfies the requirement tensorflow (from versions: )\nNo matching distribution found for tensorflow\n\n\nThere are many proposed solutions, none of them helped in my case (they are all generally along the lines of \"You probably have python3 for 32-bit achitectures installed, tensorflow requires 64-bit\" or some other python mismatch mistake (whereas here, it's simply seems to be that cygwin's python had problems installing tensorflow-gpu)).\nWhat did end up working for me was doing...\n\nInstall python3 via the official Windows way for the Windows system (the cygwin system is separate, so uses a different python)\nOpen the Command Prompt in Windows (not a cygwin terminal) and do...\n\nC:\\Users\\me\\python-virtualenvs-base>python\nPython 3.6.2 (v3.6.2:5fd33b5, Jul 8 2017, 04:57:36) [MSC v.1900 64 bit (AMD64)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> exit()\n\nC:\\Users\\me\\python-virtualenvs-base>pip -V\npip 9.0.1 from c:\\users\\me\\appdata\\local\\programs\\python\\python36\\lib\\site-packages (python 3.6)\n\nC:\\Users\\me\\python-virtualenvs-base>pip install virtualenv\nCollecting virtualenv\n Downloading https://files.pythonhosted.org/packages/b6/30/96a02b2287098b23b875bc8c2f58071c35d2efe84f747b64d523721dc2b5/virtualenv-16.0.0-py2.py3-none-any.whl (1.9MB)\n 100% |████████████████████████████████| 1.9MB 435kB/s\nInstalling collected packages: virtualenv\nSuccessfully installed virtualenv-16.0.0\nYou are using pip version 9.0.1, however version 18.0 is available.\nYou should consider upgrading via the 'python -m pip install --upgrade pip' command.\n\nC:\\Users\\me\\python-virtualenvs-base>virtualenv tensorflow-examples\nUsing base prefix 'c:\\\\users\\\\me\\\\appdata\\\\local\\\\programs\\\\python\\\\python36'\nNew python executable in 
C:\\Users\\me\\python-virtualenvs-base\\tensorflow-examples\\Scripts\\python.exe\nInstalling setuptools, pip, wheel...done.\n\n\nThen, can go back to the cygwin terminal, navigate back to that virtualenv that you created in the command prompt and do...\n\n ➜ tensorflow-examples source ./Scripts/activate\n (tensorflow-examples) ➜ tensorflow-examples python -V\n Python 3.6.2\n (tensorflow-examples) ➜ tensorflow-examples pip install tensorflow-gpu\n Collecting tensorflow-gpu\n Downloading \n ....\n\nNotice you don't do source ./bin/activate in the virtualenv as you would if you had created the virtualenv in cygwin's pseudo-linux environment, but instead do source ./Scripts/activate.\n",
"My env: Win 10, python 3.6 \npip3 install --upgrade tensorflow\npip install --upgrade tensorflow\n\nWith error:\n> Collecting tensorflow Could not find a version that satisfies the\n> requirement tensorflow (from versions: ) No matching distribution\n> found for tensorflow\n\nI also tried pip install tensorflow and pip install tensorflow-gpu.\nBut error:\n> Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow\n> Could not find a version that satisfies the requirement tensorflow-gpu (from versions: ) No matching distribution found for tensorflow-gpu\n\nInstall OK when tried with Step: (https://www.tensorflow.org/install/install_windows)\n\nFollow the instructions on the Anaconda download site to download\nand install Anaconda. https://www.continuum.io/downloads\nCreate a conda environment named tensorflow by invoking the\nfollowing command:\nC:> conda create -n tensorflow pip python=3.5 \n\nActivate the conda environment by issuing the following command:\nC:> activate tensorflow\n (tensorflow)C:> # Your prompt should change \n\nIssue the appropriate command to install TensorFlow inside your\nconda environment. To install the CPU-only version of TensorFlow,\nenter the following command:\n(tensorflow)C:> pip install --ignore-installed --upgrade tensorflow \n\nTo install the GPU version of TensorFlow, enter the following\ncommand (on a single line):\n(tensorflow)C:> pip install --ignore-installed --upgrade tensorflow-gpu \n\n\n",
"If you are trying to install Tensorflow with Anaconda on Windows, a free advice is to please uninstall anaconda and download a 64-bit Python version, ending with amd64 from releases page. For me, its python-3.7.8-amd64.exe\nThen install Tensorflow in a virtual environment by following the instructions on official website of Tensorflow.\n",
"I had the same issue and the problem was the AWS machine I was using had an ARM processor!\nI had to manually build tensorflow\n",
"I was able to install tensorflow-macos and tensrflow-metal on my Mac\n$ python -m pip install -U pip\n$ pip install tensorflow-macos\n$ pip install tensorflow-metal\n\n",
"The correct way to install it would be as mentioned here\n$ pip install --upgrade TF_BINARY_URL # Python 2.7\n$ pip3 install --upgrade TF_BINARY_URL # Python 3.N\n\nFind the correct TF_BINARY_URL for your hardware from the tensor flow official homepage\n",
"The only thing that worked for me was to use Ananconda and create a new conda env with conda create -n tensorflow python=3.5 then activate using activate tensorflow and finally conda install -c conda-forge tensorflow. \nThis works around every issue I had including ssl certs, proxy settings, and does not need admin access. It should be noted that this is not directly supported by the tensorflow team.\nSource\n",
"Here is what I did for Windows 10! I also did not disturb my previous installation of Python 2.7\nStep1: Install Windows x86-64 executable installer from the link: \nhttps://www.python.org/downloads/release/python-352/\nStep2: Open cmd as Administrator\n\nStep3: Type this command: \npip install https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl\n\nYou should see that it works and as shown in the picture below, I also tried the sample example. \n",
"I've found out the problem.\nI'm using a Windows computer which previously had Python 2 installed.\nAfter Python 3 was installed (without setting the path, I could successfully check the version of pip3 - but the python executable still pointed to Python 2).\nI then set the path to the python3 executable (removing all python2 paths), started a new command prompt, and tried to reinstall Tensorflow. It works!\nI think this problem could happen on macOS too since there is a default python on the macOS system.\n",
"Check https://pypi.python.org/pypi/tensorflow to see which packages are available.\nAs of this writing, they don't provide a source package, so if there's no prebuilt one for your platform, this error occurs. If you add -v to the pip command line, you'll see it iterating over the packages that are available at PyPI and discarding them for being incompatible.\nYou need to either find a prebuilt package somewhere else, or compile tensorflow yourself from its sources by instructions at https://www.tensorflow.org/install/install_sources .\nThey have a good reason for not building for some platforms though:\n\nA win32 package is missing because TensorFlow's dependency, Bazel, only supports win64.\nFor win64, only 3.5+ is supported because earlier versions are compiled with compilers without C++11 support.\n\n",
"It seems there could be multiple reasons for tensorFlow not getting installed via pip. The one I faced on windows 10 was that I didn't had supported version of cudnn in my system path. As of now [Dec 2017], tensorflow on windows only supports cudnn v6.1. So, provide the path of cudnn 6.1, if everything else is correct then tensorflow should be installed.\n",
"I have experienced the same error while I tried to install tensorflow in an anaconda package.\nAfter struggling a lot, I finally found an easy way to install any package without running into an error.\nFirst create an environment in your anaconda administrator using this command \nconda create -n packages\n\nNow activate that environment \nactivate packages \n\nand try running \npip install tensorflow \n\nAfter a successful installation, we need to make this environment accessible to jupyter notebook. \nFor that, you need to install a package called ipykernel using this command \npip install ipykernel\n\nAfter installing ipykernel enter the following command \npython -m ipykernel install --user --name=packages\n\nAfter running this command, this environment will be added to jupyter notebook\nand that's it.\nJust go to your jupyter notebook, click on new notebook, and you can see your environment. Select that environment and try importing tensorflow and in case if you want to install any other packages, just activate the environment and install those packages and use that environment in your jupyter \n",
"I was having this problem too. When looking at the different .whl files. I noticed there was no 32-bit version of tensorflow on python 3.7. In the end just had to install 64bit Python 3.7 from here.\n",
"2.0 COMPATIBLE SOLUTION:\nExecute the below commands in Terminal (Linux/MacOS) or in Command Prompt (Windows) to install Tensorflow 2.0 using Pip:\n#Install tensorflow using pip virtual env \npip install virtualenv\nvirtualenv tf_2.0.0 # tf_2.0.0 is virtual env name\nsource tf_2.0.0/bin/activate\n#You should see tf_2.0.0 Env now. Execute the below steps\npip install tensorflow==2.0.0\npython\n>>import tensorflow as tf\n>>tf.__version__\n2.0.0\n\nExecute the below commands in Terminal (Linux/MacOS) or in Command Prompt (Windows) to install Tensorflow 2.0 using Bazel:\ngit clone https://github.com/tensorflow/tensorflow.git\ncd tensorflow\n\n#The repo defaults to the master development branch. You can also checkout a release branch to build:\ngit checkout r2.0\n\n#Configure the Build => Use the Below line for Windows Machine\npython ./configure.py \n\n#Configure the Build => Use the Below line for Linux/MacOS Machine\n./configure\n#This script prompts you for the location of TensorFlow dependencies and asks for additional build configuration options. \n\n#Build Tensorflow package\n\n#CPU support\nbazel build --config=opt //tensorflow/tools/pip_package:build_pip_package \n\n#GPU support\nbazel build --config=opt --config=cuda --define=no_tensorflow_py_deps=true //tensorflow/tools/pip_package:build_pip_package\n\n",
"For Window you can use below command\npython3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-2.3.0-cp38-cp38-win_amd64.whl\n\n",
"Had similar problem\nTurned out the default is GPU version, and I've installed it on a server with no GPU.\npip install --upgrade tensorflow-cpu\n\nDid the trick\n",
"It is easier using Git, they are provided the methods on the websites but the link access may not be significant you may read from\nreferences https://www.tensorflow.org/install/source_windows\ngit clone https://github.com/tensorflow/tensorflow.git\nMy Python is 3.9.7\nI also use Windows 10 with the requirements as below:\n\n1. Microsoft C++ Retribution installed from Microsoft Visual Studio that matches with x64bits as required in the list.\n\n1.1 Microsoft Visual C++ 2012 Redistribution ( x64 ) and updates \n1.2 Microsoft Visual C++ 2013 Redistributable (x64) - 12.0.40664\n1.3 Microsoft Visual C++ 2015-2019 Redistributable (x64) - 14.29.30133\n1.4 vs_community__1795732196.1624941787.exe updates\n\n2. Python and AI learning \ntensorboard 2.6.0\ntensorboard-data-server 0.6.1\ntensorboard-plugin-profile 2.5.0\ntensorboard-plugin-wit 1.8.0\n***tensorflow 2.6.0\ntensorflow-datasets 4.4.0\ntensorflow-estimator 2.6.0\n***tensorflow-gpu 2.6.0\ntensorflow-hub 0.12.0\ntensorflow-metadata 1.2.0\ntensorflow-text 2.6.0\n***PyOpenGL 3.1.5\npyparsing 2.4.7\npython-dateutil 2.8.2\npython-slugify 5.0.2\npython-speech-features 0.6\nPyWavelets 1.1.1\nPyYAML 5.4.1\nscikit-image 0.18.3\nscikit-learn 1.0.1\n***gym 0.21.0\n\n\n\n\n\n",
"Something that will tell you specifically what the issue is is to do:\npip install -vvv tensorflow\nThis will show you the wheel files that are available and why they are not matched.\nIf you do then pip debug --verbose it will show you all the tags that are compatible.\nIn my case I was trying to install tensorflow on an m1 mac in a multipass ubuntu instance, and needed https://pypi.org/project/tensorflow-aarch64/ instead\n",
"I understand that the issue is pretty old but recently I faced it on MacBook Air M1. The solution was just to use this command pip install tensorflow-macos.\n"
] |
[
833,
326,
90,
55,
43,
43,
21,
16,
11,
11,
7,
7,
7,
6,
6,
6,
6,
5,
5,
5,
4,
4,
4,
3,
3,
3,
3,
3,
2,
2,
2,
2,
2,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] |
[
"You may try this\npip install --upgrade tensorflow\n\n",
"The above answers helped me to solve my issue specially the first answer. But adding to that point after the checking the version of python and we need it to be 64 bit version.\nBased on the operating system you have we can use the following command to install tensorflow using pip command.\nThe following link has google api links which can be added at the end of the following command to install tensorflow in your respective machine.\nRoot command: python -m pip install --upgrade (link)\nlink : respective OS link present in this link\n"
] |
[
-1,
-1
] |
[
"pip",
"python",
"tensorflow"
] |
stackoverflow_0038896424_pip_python_tensorflow.txt
|
Q:
Webscraping using Scrapy, where is the output?
I am trying to build a spider that gathers information about startups. I wrote a Python script with Scrapy that should access the website and store the information in a dictionary. I think the code should work from a logic point of view, but somehow I do not get any output. My code:
import scrapy

class StartupsSpider(scrapy.Spider):
    name = 'startups'
    #name of the spider
    allowed_domains = ['www.bmwk.de/Navigation/DE/InvestDB/INVEST-DB_Liste/investdb.html']
    #list of allowed domains
    start_urls = ['https://bmwk.de/Navigation/DE/InvestDB/INVEST-DB_Liste/investdb.html']
    #starting url

    def parse(self, response):
        startups = response.xpath('//*[contains(@class,"card-link-overlay")]/@href').getall()
        #parse initial start URL for the specific startup URL
        for startup in startups:
            absolute_url = response.urljoin(startup)
            yield scrapy.Request(absolute_url, callback=self.parse_startup)
            #parse the actual startup information
        next_page_url = response.xpath('//*[@class ="pagination-link"]/@href').get()
        #link to next page
        absolute_next_page_url = response.urljoin(next_page_url)
        #go through all pages on start URL
        yield scrapy.Request(absolute_next_page_url)

    def parse_startup(self, response):
        #get information regarding startup
        startup_name = response.css('h1::text').get()
        startup_hompage = response.xpath('//*[@class="document-info-item"]/a/@href').get()
        startup_description = response.css('div.document-info-item::text')[16].get()
        branche = response.css('div.document-info-item::text')[4].get()
        founded = response.xpath('//*[@class="date"]/text()')[0].getall()
        employees = response.css('div.document-info-item::text')[9].get()
        capital = response.css('div.document-info-item::text')[11].get()
        applied_for_invest = response.xpath('//*[@class="date"]/text()')[1].getall()
        contact_name = response.css('p.card-title-subtitle::text').get()
        contact_phone = response.css('p.tel > span::text').get()
        contact_mail = response.xpath('//*[@class ="person-contact"]/p/a/span/text()').get()
        contact_address_street = response.xpath('//*[@class ="adr"]/text()').get()
        contact_address_plz = response.xpath('//*[@class ="locality"]/text()').getall()
        contact_state = response.xpath('//*[@class ="country-name"]/text()').get()
        yield {'Startup': startup_name,
               'Homepage': startup_hompage,
               'Description': startup_description,
               'Branche': branche,
               'Gründungsdatum': founded,
               'Anzahl Mitarbeiter': employees,
               'Kapital Bedarf': capital,
               'Datum des Förderbescheids': applied_for_invest,
               'Contact': contact_name,
               'Telefon': contact_phone,
               'E-Mail': contact_mail,
               'Adresse': contact_address_street + contact_address_plz + contact_state}
A:
1. You're not getting output because your allowed_domains is wrong.
2. In the last line (Adresse), you're trying to concatenate list and str types, so you'll get an error.
3. Your pagination link is wrong: on the first page you're getting the next page, but on the second page you're getting the previous page.
4. You're not doing any error checking. On some pages you're getting None for some of the values, and you're trying to index into selector lists that are too short, which results in an error.

I fixed 1, 2, and 3, but you'll need to fix number 4 yourself.
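Issue 2 can be seen in isolation: `.get()` returns a `str` while `.getall()` returns a `list`, and Python refuses to add the two. A tiny sketch with made-up stand-in values for the scraped fields:

```python
street = "Musterstr. 1"      # what .get() yields: a single str
plz = ["10115", "Berlin"]    # what .getall() yields: a list of str

try:
    address = street + plz   # what the question's last line effectively does
except TypeError:
    # joining everything into one string avoids the crash
    address = " ".join([street] + plz)

print(address)  # Musterstr. 1 10115 Berlin
```

This is why the fixed spider builds `Adresse` with `' '.join(...)` instead of `+`.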
import scrapy


class StartupsSpider(scrapy.Spider):
    # name of the spider
    name = 'startups'

    # list of allowed domains
    allowed_domains = ['bmwk.de']

    # starting url
    start_urls = ['https://bmwk.de/Navigation/DE/InvestDB/INVEST-DB_Liste/investdb.html']

    def parse(self, response):
        # parse initial start URL for the specific startup URL
        startups = response.xpath('//*[contains(@class,"card-link-overlay")]/@href').getall()
        for startup in startups:
            absolute_url = response.urljoin(startup)
            # parse the actual startup information
            yield scrapy.Request(absolute_url, callback=self.parse_startup)

        # link to next page
        next_page_url = response.xpath('(//*[@class ="pagination-link"])[last()]/@href').get()
        if next_page_url:
            # go through all pages on start URL
            absolute_next_page_url = response.urljoin(next_page_url)
            yield scrapy.Request(absolute_next_page_url)

    def parse_startup(self, response):
        # get information regarding startup
        startup_name = response.css('h1::text').get()
        startup_hompage = response.xpath('//*[@class="document-info-item"]/a/@href').get()
        # for example for some of the pages you'll get an error here:
        startup_description = response.css('div.document-info-item::text')[16].get()
        branche = response.css('div.document-info-item::text')[4].get()
        founded = response.xpath('//*[@class="date"]/text()')[0].getall()
        employees = response.css('div.document-info-item::text')[9].get()
        capital = response.css('div.document-info-item::text')[11].get()
        applied_for_invest = response.xpath('//*[@class="date"]/text()')[1].getall()
        contact_name = response.css('p.card-title-subtitle::text').get()
        contact_phone = response.css('p.tel > span::text').get()
        contact_mail = response.xpath('//*[@class ="person-contact"]/p/a/span/text()').get()
        Adresse = ' '.join(response.xpath('//*[@class ="address"]//text()').getall())
        yield {'Startup': startup_name,
               'Homepage': startup_hompage,
               'Description': startup_description,
               'Branche': branche,
               'Gründungsdatum': founded,
               'Anzahl Mitarbeiter': employees,
               'Kapital Bedarf': capital,
               'Datum des Förderbescheids': applied_for_invest,
               'Contact': contact_name,
               'Telefon': contact_phone,
               'E-Mail': contact_mail,
               'Adresse': Adresse}
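For number 4, one possible approach (a sketch, not part of the answer above; `safe_get` is a hypothetical helper name) is to wrap the index lookups so a page that lacks a field produces None instead of an IndexError killing the callback:

```python
def safe_get(values, index, default=None):
    """Return values[index] if it exists, otherwise the default.

    Works on any indexable sequence, including the SelectorList that
    response.css(...) / response.xpath(...) return, so a startup page
    that is missing a field yields the default instead of raising
    IndexError.
    """
    try:
        return values[index]
    except IndexError:
        return default


# In parse_startup this would replace the direct indexing, e.g.:
#   sel = safe_get(response.css('div.document-info-item::text'), 16)
#   startup_description = sel.get() if sel is not None else None
```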
A:
You need to run the spider from the prompt and tell Scrapy where to write the items, e.g.:
scrapy crawl startups -o output.json
(or output.csv). Without -o, the yielded items only show up in the log.
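If you'd rather not pass -o on every run, recent Scrapy versions (2.1+) can declare the export in settings via the FEEDS setting. A minimal fragment (the filename here is just an example):

```python
# Goes in settings.py, or in a spider's custom_settings dict.
# Equivalent to running: scrapy crawl startups -o startups.json
FEEDS = {
    'startups.json': {
        'format': 'json',   # other built-in formats: 'csv', 'jsonlines', 'xml'
        'encoding': 'utf8',
    },
}
```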
|
|
[
"\nYou're not getting output because your allowed_domains is wrong.\nIn the last line (Adresse), you're trying to concatenate list and str types so you'll get an error.\nYour pagination link is wrong, in the first page you're getting the next page, and in the second page you're getting the previous page.\nYou're not doing any error checking. In some pages you're getting None for some of the values and you're trying to get their i'th character which results in an error.\n\nI fixed 1, 2, and 3. But you'll need to fix number 4 yourself.\nimport scrapy\n\n\nclass StartupsSpider(scrapy.Spider):\n # name of the spider\n name = 'startups'\n\n # list of allowed domains\n allowed_domains = ['bmwk.de']\n\n # starting url\n start_urls = ['https://bmwk.de/Navigation/DE/InvestDB/INVEST-DB_Liste/investdb.html']\n \n def parse(self, response):\n # parse initial start URL for the specific startup URL\n startups = response.xpath('//*[contains(@class,\"card-link-overlay\")]/@href').getall()\n\n for startup in startups:\n absolute_url = response.urljoin(startup)\n\n # parse the actual startup information\n yield scrapy.Request(absolute_url, callback=self.parse_startup)\n\n # link to next page\n next_page_url = response.xpath('(//*[@class =\"pagination-link\"])[last()]/@href').get()\n if next_page_url:\n # go through all pages on start URL\n absolute_next_page_url = response.urljoin(next_page_url)\n yield scrapy.Request(absolute_next_page_url)\n\n def parse_startup(self, response):\n # get information regarding startup\n startup_name = response.css('h1::text').get()\n startup_hompage = response.xpath('//*[@class=\"document-info-item\"]/a/@href').get()\n # for example for some of the pages you'll get an error here:\n startup_description = response.css('div.document-info-item::text')[16].get()\n branche = response.css('div.document-info-item::text')[4].get()\n founded = response.xpath('//*[@class=\"date\"]/text()')[0].getall()\n employees = 
response.css('div.document-info-item::text')[9].get()\n capital = response.css('div.document-info-item::text')[11].get()\n applied_for_invest = response.xpath('//*[@class=\"date\"]/text()')[1].getall()\n\n contact_name = response.css('p.card-title-subtitle::text').get()\n contact_phone = response.css('p.tel > span::text').get()\n contact_mail = response.xpath('//*[@class =\"person-contact\"]/p/a/span/text()').get()\n Adresse = ' '.join(response.xpath('//*[@class =\"address\"]//text()').getall())\n\n yield {'Startup': startup_name,\n 'Homepage': startup_hompage,\n 'Description': startup_description,\n 'Branche': branche,\n 'Gründungsdatum': founded,\n 'Anzahl Mitarbeiter': employees,\n 'Kapital Bedarf': capital,\n 'Datum des Förderbescheids': applied_for_invest,\n 'Contact': contact_name,\n 'Telefon': contact_phone,\n 'E-Mail': contact_mail,\n 'Adresse': Adresse}\n\n",
"you need to run in prompt:\nscrapy crawl -o filename.(json or csv)\n"
] |
[
1,
0
] |
[] |
[] |
[
"python",
"scrapy",
"web_crawler",
"web_scraping"
] |
stackoverflow_0074576610_python_scrapy_web_crawler_web_scraping.txt
|
Q:
Unable to click through option boxes in Selenium
I am trying to make a Python script that will check appointment availability and inform me when an earlier date opens up.
I am stuck at the 4th selection page, for locations. I can't seem to click the 'regions' to display the available actual locations.
This is what I have:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_experimental_option("detach", True)
browser = webdriver.Chrome(options=options)
browser.get('url')
medical = browser.find_element(By.XPATH, "//button[@class='btn btn-primary next-button show-loading-text']")
medical.click()
timesensitive = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH,"//button[@class='btn btn-primary next-button show-loading-text']")))
timesensitive.click()
test = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH,"//button[@class='btn btn-primary next-button show-loading-text']")))
test.click()
region = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH,"//label[@class='btn btn-sm btn-default'][4]")))
region.click()
A:
You missed the white space; the class should be 'btn btn-sm btn-default ' (note the trailing space):
region = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH,"//label[@class='btn btn-sm btn-default '][4]")))
region.click()
A:
You may stumble now and again on attribute values containing all sorts of spaces at the beginning or end that are easy to miss. Here is one solution to avoid such pitfalls:
region = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH,"//label[contains(@class, 'btn btn-sm btn-default')][4]")))
region.click()
Also, considering you are waiting on all elements to load, why not define wait at the beginning, then just use it?
wait = WebDriverWait(driver, 10)
And then the code becomes:
[...]
timesensitive = wait.until(EC.presence_of_element_located((By.XPATH,"//button[@class='btn btn-primary next-button show-loading-text']")))
timesensitive.click()
test = wait.until(EC.presence_of_element_located((By.XPATH,"//button[@class='btn btn-primary next-button show-loading-text']")))
test.click()
region = wait.until(EC.presence_of_element_located((By.XPATH,"//label[contains(@class, 'btn btn-sm btn-default')][4]")))
region.click()
Also, here is a more robust way of selecting the desired region (obviously you know the name of it - 'SE'):
wait.until(EC.presence_of_element_located((By.XPATH,'//label/input[@value="SE"]'))).click()
Selenium documentation can be found here.
|
Unable to click through option boxes in Selenium
|
I am trying to make a Python script that will check appointment availability and inform me when an earlier date opens up.
I am stuck at the 4th selection page, for locations. I can't seem to click the 'regions' to display the available actual locations.
This is what I have:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_experimental_option("detach", True)
browser = webdriver.Chrome(options=options)
browser.get('url')
medical = browser.find_element(By.XPATH, "//button[@class='btn btn-primary next-button show-loading-text']")
medical.click()
timesensitive = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH,"//button[@class='btn btn-primary next-button show-loading-text']")))
timesensitive.click()
test = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH,"//button[@class='btn btn-primary next-button show-loading-text']")))
test.click()
region = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH,"//label[@class='btn btn-sm btn-default'][4]")))
region.click()
|
[
"You miss the white space, it should be 'btn btn-sm btn-default '\nregion = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH,\"//label[@class='btn btn-sm btn-default '][4]\")))\nregion.click()\n\n",
"You may stumble now and again on attribute values containing all sort of spaces at beginning/end you might miss. Here is one solution to avoid such pitfalls:\nregion = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH,\"//label[contains(@class, 'btn btn-sm btn-default')][4]\")))\nregion.click()\n\nAlso, considering you are waiting on all elements to load, why not define wait at the beginning, then just use it?\nwait = WebDriverWait(driver, 10)\n\nAnd then the code becomes:\n[...]\n\ntimesensitive = wait.until(EC.presence_of_element_located((By.XPATH,\"//button[@class='btn btn-primary next-button show-loading-text']\")))\ntimesensitive.click()\n\ntest = wait.until(EC.presence_of_element_located((By.XPATH,\"//button[@class='btn btn-primary next-button show-loading-text']\")))\ntest.click()\n\nregion = wait.until(EC.presence_of_element_located((By.XPATH,\"//label[contains(@class, 'btn btn-sm btn-default')][4]\")))\nregion.click()\n\nAlso, here is a more robust way of selecting the desired region (obviously you know the name of it - 'SE'):\nwait.until(EC.presence_of_element_located((By.XPATH,'//label/input[@value=\"SE\"]'))).click()\n\nSelenium documentation can be found here.\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"selenium",
"selenium_webdriver",
"web_scraping"
] |
stackoverflow_0074580360_python_selenium_selenium_webdriver_web_scraping.txt
|
Q:
How can I make the origin of the x-axis and the origin of the y-axis of a plot overlap in matplotlib?
I have a simple graph to make, whose source code is below:
import pandas as pd
def plot_responses(index, y):
index='Arsen initial'
y=pd.Series({1: 0.8, 2: 0.8, 3: 0.59, 4: 0.54, 5: 0.86, 6: 0.54, 7: 0.97, 8: 0.69, 9: 1.39, 10: 0.95, 11: 2.12, 12: 1.95, 13: 0.99, 14: 0.76, 15: 0.82, 16: 0.63, 17: 1.09, 18: 0.9, 19: 1.0, 20: 0.84, 21: 0.71, 22: 0.71, 23: 0.59, 24: 0.58, 25: 1.66, 26: 1.48, 27: 1.71, 28: 1.69, 29: 1.98, 30: 1.22, 31: 1.09, 32: 1.41, 33: 1.11, 34: 0.83, 35: 4.11, 36: 4.81, 37: 5.28, 38: 4.87, 39: 4.66, 40: 5.1, 41: 0.61, 42: 0.58, 43: 0.74, 44: 0.43, 45: 0.69, 46: 0.43, 47: 0.62, 48: 0.2, 49: 0.77, 50: 0.93, 51: 0.56, 52: 0.77, 53: 0.91, 54: 0.55, 55: 1.15, 56: 0.53, 57: 0.62, 58: 0.42, 59: 0.55, 60: 0.41, 61: 0.67, 62: 0.5, 63: 0.72, 64: 0.53, 65: 0.77, 66: 0.68, 67: 0.65, 68: 0.42, 69: 0.59, 70: 0.3, 71: 0.8, 72: 0.54, 73: 0.61, 74: 0.77, 75: 0.8, 76: 0.37, 77: 1.21, 78: 0.73, 79: 0.81, 80: 0.8, 81: 0.45, 82: 0.43})
values_for_5 = []
values_for_30 = []
for i in range(1, y.size + 1):
if i % 2 == 0:
values_for_5.append(y[i])
else:
values_for_30.append(y[i])
sample_no_for_5 = [i for i in range(1, len(y), 2)]
sample_no_for_30 = [i + 1 for i in range(1, len(y), 2)]
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
ax.spines['left'].set_position('zero')
ax.plot(sample_no_for_5, values_for_5, 'yo')
ax.plot(sample_no_for_30, values_for_30, 'bo')
plt.xlabel('Numarul mostrei', fontsize=15)
plt.ylabel(index, fontsize=15)
plt.title('Continutul de ' + str(index.replace(' initial', '')) + ' din gudronul acid netratat')
for i in range(len(values_for_30)):
ax.text(sample_no_for_5[i], values_for_5[i], '5')
ax.text(sample_no_for_30[i], values_for_30[i], '30')
plt.xticks(range(0, y.size, 5))
ax.grid()
plt.savefig(str(index.replace('initial', '').strip()) + '.jpg')
plt.show()
Running this code, I get the following figure:
Which is pretty good, but I want two more things and I don't know how to do it:
I want the origin of the y-axis to overlap with the origin of the x-axis (the two zeros should be in the same place);
I want to get rid of the grid lines from the left of the y-axis
How do I do that?
A:
ax.spines['left'].set_position('zero')
ax.spines['bottom'].set_position('zero')
ax.set_xlim(0, y.size)
ax.set_ylim(0, max(y) + 0.5)
This makes both axes meet at (0, 0) and removes the grid lines that extend beyond the plot.
A:
Use the functions xlim() and ylim():
Link: https://pythonguides.com/matplotlib-set-axis-range/
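Putting the pieces from both answers together, here is a minimal self-contained sketch (with made-up data, using the headless Agg backend so it runs anywhere):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; safe to drop when running interactively
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [0.5, 1.5, 1.0], 'bo')

# pin both spines to data coordinate 0 ...
ax.spines['left'].set_position('zero')
ax.spines['bottom'].set_position('zero')

# ... and start both axes at 0 so the two origins coincide
# and no grid lines are drawn to the left of the y-axis
ax.set_xlim(0, 3)
ax.set_ylim(0, 2)
ax.grid()
```

The same spines/xlim/ylim calls drop straight into the plot_responses function from the question.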
|
How can I make the origin of the x-axis and the origin of the y-axis of a plot overlap in matplotlib?
|
I have a simple graph to make, whose source code is below:
import pandas as pd
def plot_responses(index, y):
index='Arsen initial'
y=pd.Series({1: 0.8, 2: 0.8, 3: 0.59, 4: 0.54, 5: 0.86, 6: 0.54, 7: 0.97, 8: 0.69, 9: 1.39, 10: 0.95, 11: 2.12, 12: 1.95, 13: 0.99, 14: 0.76, 15: 0.82, 16: 0.63, 17: 1.09, 18: 0.9, 19: 1.0, 20: 0.84, 21: 0.71, 22: 0.71, 23: 0.59, 24: 0.58, 25: 1.66, 26: 1.48, 27: 1.71, 28: 1.69, 29: 1.98, 30: 1.22, 31: 1.09, 32: 1.41, 33: 1.11, 34: 0.83, 35: 4.11, 36: 4.81, 37: 5.28, 38: 4.87, 39: 4.66, 40: 5.1, 41: 0.61, 42: 0.58, 43: 0.74, 44: 0.43, 45: 0.69, 46: 0.43, 47: 0.62, 48: 0.2, 49: 0.77, 50: 0.93, 51: 0.56, 52: 0.77, 53: 0.91, 54: 0.55, 55: 1.15, 56: 0.53, 57: 0.62, 58: 0.42, 59: 0.55, 60: 0.41, 61: 0.67, 62: 0.5, 63: 0.72, 64: 0.53, 65: 0.77, 66: 0.68, 67: 0.65, 68: 0.42, 69: 0.59, 70: 0.3, 71: 0.8, 72: 0.54, 73: 0.61, 74: 0.77, 75: 0.8, 76: 0.37, 77: 1.21, 78: 0.73, 79: 0.81, 80: 0.8, 81: 0.45, 82: 0.43})
values_for_5 = []
values_for_30 = []
for i in range(1, y.size + 1):
if i % 2 == 0:
values_for_5.append(y[i])
else:
values_for_30.append(y[i])
sample_no_for_5 = [i for i in range(1, len(y), 2)]
sample_no_for_30 = [i + 1 for i in range(1, len(y), 2)]
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
ax.spines['left'].set_position('zero')
ax.plot(sample_no_for_5, values_for_5, 'yo')
ax.plot(sample_no_for_30, values_for_30, 'bo')
plt.xlabel('Numarul mostrei', fontsize=15)
plt.ylabel(index, fontsize=15)
plt.title('Continutul de ' + str(index.replace(' initial', '')) + ' din gudronul acid netratat')
for i in range(len(values_for_30)):
ax.text(sample_no_for_5[i], values_for_5[i], '5')
ax.text(sample_no_for_30[i], values_for_30[i], '30')
plt.xticks(range(0, y.size, 5))
ax.grid()
plt.savefig(str(index.replace('initial', '').strip()) + '.jpg')
plt.show()
Running this code, I get the following figure:
Which is pretty good, but I want two more things and I don't know how to do it:
I want the origin of the y-axis to overlap with the origin of the x-axis (the two zeros should be in the same place);
I want to get rid of the grid lines from the left of the y-axis
How do I do that?
|
[
"ax.spines['left'].set_position('zero')\nax.spines['bottom'].set_position('zero')\n\nax.set_xlim(0, y.size)\nax.set_ylim(0, max(y) + 0.5)\n\nthis helps you to overlap both axises at (0,0) and gets rid of the lines that exceed the plot.\n\n",
"Use this functions xlim() and ylim()\nLink: https://pythonguides.com/matplotlib-set-axis-range/\n"
] |
[
1,
1
] |
[] |
[] |
[
"matplotlib",
"pandas",
"python",
"series"
] |
stackoverflow_0074580527_matplotlib_pandas_python_series.txt
|
Q:
Allow to user to Enter only numbers and disable letters
can you help me
I have this code
Label (self.window,width=55,text=":Enter your wight ").pack ()
self.kg = StringVar ()
Entry (self.window,width=55, textvariable=self.kg).pack ()
And I want to allow the user to enter numbers only,
with at most 3 digits and a maximum value of 250.
Please help me, and thank you!
A:
Too late but here you go:
def comm(self):
def val():
try:
int(entry.get())
if len(entry.get()) <= 3:
sum = 250 - int(entry.get())
if sum < 0:
entry.delete(0, 'end')
else:
entry.delete(0, 'end')
except:
entry.delete(0, 'end')
root.after(1, val)
entry.bind('<Key>',comm)
Replace entry with the name of your Entry widget; the same goes for root.
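An alternative to polling with after is Tk's built-in key validation. The rule itself can live in a plain function (the names below are made up), which also makes it easy to test without a GUI:

```python
def validate_weight(proposed: str) -> bool:
    """Return True if the proposed entry contents are acceptable."""
    if proposed == "":           # allow the user to clear the field
        return True
    if not proposed.isdigit():   # digits only: rejects letters, signs, spaces
        return False
    return len(proposed) <= 3 and int(proposed) <= 250

# Wiring it to the Entry (sketch):
#   vcmd = (self.window.register(validate_weight), '%P')
#   Entry(self.window, width=55, textvariable=self.kg,
#         validate='key', validatecommand=vcmd).pack()
# '%P' passes the would-be new value of the entry to the callback,
# so invalid keystrokes are rejected before they ever appear.
```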
|
Allow to user to Enter only numbers and disable letters
|
can you help me
I have this code
Label (self.window,width=55,text=":Enter your wight ").pack ()
self.kg = StringVar ()
Entry (self.window,width=55, textvariable=self.kg).pack ()
And I want to allow the user to enter numbers only,
with at most 3 digits and a maximum value of 250.
Please help me, and thank you!
|
[
"Too late but here you go:\ndef comm(self):\n def val():\n try:\n int(entry.get())\n if len(entry.get()) <= 3:\n sum = 250 - int(entry.get())\n if sum < 0:\n entry.delete(0, 'end')\n else:\n entry.delete(0, 'end')\n except:\n entry.delete(0, 'end')\n\n root.after(1, val)\n\nentry.bind('<Key>',comm)\n\nreplace 'entry' with the Entry name of yours, also the same goes for root\n"
] |
[
0
] |
[] |
[] |
[
"letter",
"numbers",
"python",
"user_interface"
] |
stackoverflow_0066558547_letter_numbers_python_user_interface.txt
|
Q:
TypeError: descriptor 'collidelist' for 'pygame.Rect' objects doesn't apply to a 'list' object
Trying to set up a system to earn score points by killing enemies, but I keep getting: TypeError: descriptor 'collidelist' for 'pygame.Rect' objects doesn't apply to a 'list' object. But it worked one line before.
This is the first program that I've tried to write on my own. I'm still very new to this, so any help would be appreciated.
code:
import pygame
from random import randint
from sys import exit
pygame.init()
def star_movement(stars):
if stars:
for star_rect in stars:
star_rect.x -= 5
if star_rect.x >= 0: screen.blit(star_surf,star_rect)
star_list = [star for star in stars if star.x > -100]
return star_list
else: return []
def fireb_movement(fire_ball):
if fire_ball:
for fire_rect in fire_ball:
fire_rect.x += 20
if fire_rect.x >= 0: screen.blit(Fire_ball,fire_rect)
fireB_list = [fireB for fireB in fire_ball if fireB.x > -100]
return fireB_list
else: return []
def enemy_movement(enemy_rect_list):
if enemy_rect_list:
for enemy_rect in enemy_rect_list:
enemy_rect.x -= 5
if enemy_rect.x >= 0: screen.blit(enemy_surf,enemy_rect)
enemy_list = [enemy for enemy in enemy_rect_list if enemy.x > -100]
return enemy_list
else: return []
game_active = False
score = 0
stars = []
enemy_rect_list = []
fire_ball = []
time = float(pygame.time.get_ticks()/1000)
screen = pygame.display.set_mode((800,400))
game_icon = pygame.image.load("Graphics/pixels.png").convert_alpha()
pygame.display.set_icon(game_icon)
pygame.display.set_caption("Quick Dash")
clock = pygame.time.Clock()
player_surf = pygame.transform.rotate(pygame.image.load("Graphics/ships/pix_ship.png").convert_alpha(), -90)
player_rect = player_surf.get_rect(center = (100, 200))
enemy_surf = pygame.transform.rotate(pygame.image.load("Graphics/ships/pix_ship2.png").convert_alpha(), 90)
Fire_ball = pygame.transform.rotate(pygame.image.load("Graphics/ships/fireball.png").convert_alpha(), -90)
background_surf = pygame.image.load("Graphics/space.png").convert_alpha()
background_rect = background_surf.get_rect(center = (400,200))
homepage_surf = pygame.image.load("Graphics/homespace.png").convert_alpha()
homepage_rect = homepage_surf.get_rect(topleft = (0,0))
star_surf = pygame.image.load("Graphics/star.png").convert_alpha()
star_a = pygame.USEREVENT + 2
pygame.time.set_timer(star_a, 200)
enemy_wave = pygame.USEREVENT + 3
pygame.time.set_timer(enemy_wave, 1500)
font = pygame.font.Font("fonts/PublicPixel-z84yD.ttf", 20)
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.QUIT
exit()
if game_active:
if event.type == star_a:
stars.append(star_surf.get_rect(center=(randint(800, 1600), randint(0, 400))))
if event.type == enemy_wave:
enemy_rect_list.append(enemy_surf.get_rect(center=(randint(800, 1600), randint(0, 400))))
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_UP:
player_rect.y += -50
if event.key == pygame.K_DOWN:
player_rect.y += 50
if player_rect.collidepoint((100, 400)):
player_rect.y = 300
if player_rect.collidepoint((100, -55)):
player_rect.y = -15
if pygame.Rect.collidelist(player_rect, enemy_rect_list) != -1:
game_active = False
if pygame.Rect.collidelist(fire_ball, enemy_rect_list) != -1:
score += 1
if event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE:
fire_ball.append(Fire_ball.get_rect(center =(110,(player_rect.y + 60))))
else:
if event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE:
game_active = True
if game_active:
screen.blit(background_surf, background_rect)
stars = star_movement(stars)
enemy_rect_list = enemy_movement(enemy_rect_list)
fire_ball = fireb_movement(fire_ball)
screen.blit(player_surf, player_rect)
main_message2 = font.render(score, False, "#11339c")
main_message_rect = main_message2.get_rect(center=(400, 100))
else:
main_message = font.render(f"Press SPACE to start!", False, "#11339c")
main_message_rect = main_message.get_rect(center=(400, 100))
screen.blit(homepage_surf, homepage_rect)
screen.blit(main_message, main_message_rect)
pygame.display.update()
clock.tick(60)
error:
Traceback (most recent call last):
File "C:\Users\User\PycharmProjects\pythonProject\venv\game1.py", line 97, in <module>
if pygame.Rect.collidelist(fire_ball, enemy_rect_list) != -1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: descriptor 'collidelist' for 'pygame.Rect' objects doesn't apply to a 'list' object
A:
fire_ball is a list. However, collidelist detects collisions between a single rectangle and a list of rectangles. To detect collisions between two lists of rectangles, you must loop:
for ball in fire_ball:
if ball.collidelist(enemy_rect_list) >= 0:
score += 1
Alternatively you can use pygame.sprite.Sprite, pygame.sprite.Group and pygame.sprite.groupcollide (also see Permanently delete sprite from memory Pygame).
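For reference, the loop above boils down to a plain axis-aligned overlap test. A pure-Python sketch with (x, y, w, h) tuples (no pygame needed, helper names made up) behaves the same way as colliderect/collidelist for rectangles with non-zero overlap:

```python
def rects_overlap(a, b):
    """True if two (x, y, w, h) rectangles overlap with non-zero area."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def count_hits(fire_balls, enemies):
    """Number of fireballs touching at least one enemy (list-vs-list check)."""
    return sum(1 for ball in fire_balls
               if any(rects_overlap(ball, enemy) for enemy in enemies))
```

In the actual game loop you would also remove the fireball and enemy that collided, which is what pygame.sprite.groupcollide(balls, enemies, True, True) does in one call.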
|
TypeError: descriptor 'collidelist' for 'pygame.Rect' objects doesn't apply to a 'list' object
|
Trying to set up a system to earn score points by killing enemies, but I keep getting: TypeError: descriptor 'collidelist' for 'pygame.Rect' objects doesn't apply to a 'list' object. But it worked one line before.
This is the first program that I've tried to write on my own. I'm still very new to this, so any help would be appreciated.
code:
import pygame
from random import randint
from sys import exit
pygame.init()
def star_movement(stars):
if stars:
for star_rect in stars:
star_rect.x -= 5
if star_rect.x >= 0: screen.blit(star_surf,star_rect)
star_list = [star for star in stars if star.x > -100]
return star_list
else: return []
def fireb_movement(fire_ball):
if fire_ball:
for fire_rect in fire_ball:
fire_rect.x += 20
if fire_rect.x >= 0: screen.blit(Fire_ball,fire_rect)
fireB_list = [fireB for fireB in fire_ball if fireB.x > -100]
return fireB_list
else: return []
def enemy_movement(enemy_rect_list):
if enemy_rect_list:
for enemy_rect in enemy_rect_list:
enemy_rect.x -= 5
if enemy_rect.x >= 0: screen.blit(enemy_surf,enemy_rect)
enemy_list = [enemy for enemy in enemy_rect_list if enemy.x > -100]
return enemy_list
else: return []
game_active = False
score = 0
stars = []
enemy_rect_list = []
fire_ball = []
time = float(pygame.time.get_ticks()/1000)
screen = pygame.display.set_mode((800,400))
game_icon = pygame.image.load("Graphics/pixels.png").convert_alpha()
pygame.display.set_icon(game_icon)
pygame.display.set_caption("Quick Dash")
clock = pygame.time.Clock()
player_surf = pygame.transform.rotate(pygame.image.load("Graphics/ships/pix_ship.png").convert_alpha(), -90)
player_rect = player_surf.get_rect(center = (100, 200))
enemy_surf = pygame.transform.rotate(pygame.image.load("Graphics/ships/pix_ship2.png").convert_alpha(), 90)
Fire_ball = pygame.transform.rotate(pygame.image.load("Graphics/ships/fireball.png").convert_alpha(), -90)
background_surf = pygame.image.load("Graphics/space.png").convert_alpha()
background_rect = background_surf.get_rect(center = (400,200))
homepage_surf = pygame.image.load("Graphics/homespace.png").convert_alpha()
homepage_rect = homepage_surf.get_rect(topleft = (0,0))
star_surf = pygame.image.load("Graphics/star.png").convert_alpha()
star_a = pygame.USEREVENT + 2
pygame.time.set_timer(star_a, 200)
enemy_wave = pygame.USEREVENT + 3
pygame.time.set_timer(enemy_wave, 1500)
font = pygame.font.Font("fonts/PublicPixel-z84yD.ttf", 20)
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.QUIT
exit()
if game_active:
if event.type == star_a:
stars.append(star_surf.get_rect(center=(randint(800, 1600), randint(0, 400))))
if event.type == enemy_wave:
enemy_rect_list.append(enemy_surf.get_rect(center=(randint(800, 1600), randint(0, 400))))
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_UP:
player_rect.y += -50
if event.key == pygame.K_DOWN:
player_rect.y += 50
if player_rect.collidepoint((100, 400)):
player_rect.y = 300
if player_rect.collidepoint((100, -55)):
player_rect.y = -15
if pygame.Rect.collidelist(player_rect, enemy_rect_list) != -1:
game_active = False
if pygame.Rect.collidelist(fire_ball, enemy_rect_list) != -1:
score += 1
if event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE:
fire_ball.append(Fire_ball.get_rect(center =(110,(player_rect.y + 60))))
else:
if event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE:
game_active = True
if game_active:
screen.blit(background_surf, background_rect)
stars = star_movement(stars)
enemy_rect_list = enemy_movement(enemy_rect_list)
fire_ball = fireb_movement(fire_ball)
screen.blit(player_surf, player_rect)
main_message2 = font.render(score, False, "#11339c")
main_message_rect = main_message2.get_rect(center=(400, 100))
else:
main_message = font.render(f"Press SPACE to start!", False, "#11339c")
main_message_rect = main_message.get_rect(center=(400, 100))
screen.blit(homepage_surf, homepage_rect)
screen.blit(main_message, main_message_rect)
pygame.display.update()
clock.tick(60)
error:
Traceback (most recent call last):
File "C:\Users\User\PycharmProjects\pythonProject\venv\game1.py", line 97, in <module>
if pygame.Rect.collidelist(fire_ball, enemy_rect_list) != -1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: descriptor 'collidelist' for 'pygame.Rect' objects doesn't apply to a 'list' object
|
[
"fire_ball is a list. However, collidelist is there to detect collisions between a single rectangle and a list of rectangles. If you want to detect the collision between 2 lists of rectangles, you must do it in a loop:\nfor ball in fire_ball:\n if ball.collidelist(enemy_rect_list) >= 0:\n score += 1\n\n\nAlternatively you can use pygame.sprite.Sprite, pygame.sprite.Group and pygame.sprite.groupcollide (also see Permanently delete sprite from memory Pygame).\n"
] |
[
0
] |
[] |
[] |
[
"pygame",
"python",
"python_3.x"
] |
stackoverflow_0074580611_pygame_python_python_3.x.txt
|
Q:
Image size during training in yolov5
I am trying to train a custom dataset in yolov5.
So I am trying to run it with an image size of 640x480 but it is not working.
python3 /YOLOv5/yolov5/train.py --img-size 640 480 --batch 8 --epochs 300 --data data.yaml --weights yolov5s.pt --cache
usage: train.py [-h] [--weights WEIGHTS] [--cfg CFG] [--data DATA] [--hyp HYP] [--epochs EPOCHS]
[--batch-size BATCH_SIZE] [--imgsz IMGSZ] [--rect] [--resume [RESUME]] [--nosave]
[--noval] [--noautoanchor] [--noplots] [--evolve [EVOLVE]] [--bucket BUCKET]
[--cache [CACHE]] [--image-weights] [--device DEVICE] [--multi-scale]
[--single-cls] [--optimizer {SGD,Adam,AdamW}] [--sync-bn] [--workers WORKERS]
[--project PROJECT] [--name NAME] [--exist-ok] [--quad] [--cos-lr]
[--label-smoothing LABEL_SMOOTHING] [--patience PATIENCE]
[--freeze FREEZE [FREEZE ...]] [--save-period SAVE_PERIOD]
[--local_rank LOCAL_RANK] [--entity ENTITY] [--upload_dataset [UPLOAD_DATASET]]
[--bbox_interval BBOX_INTERVAL] [--artifact_alias ARTIFACT_ALIAS]
train.py: error: unrecognized arguments: 480
def parse_opt(known=False):
parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')
(https://github.com/ultralytics/yolov5.git)
A:
--img-size
only takes one argument. Use:
python3 /YOLOv5/yolov5/train.py --img-size 640 --batch 8 --epochs 300 --data data.yaml --weights yolov5s.pt --cache
The height of the image will be adjusted automatically, respecting the aspect ratio and stride requirements.
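The error comes straight from argparse: --img-size is declared with type=int, so it consumes exactly one token and the second number is left over. A small reproduction of the quoted parse_opt declaration:

```python
import argparse

parser = argparse.ArgumentParser()
# same declaration as in yolov5's parse_opt()
parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640,
                    help='train, val image size (pixels)')

# parse_known_args shows what happens to '640 480': '480' is left unmatched
opt, unknown = parser.parse_known_args(['--img-size', '640', '480'])
print(opt.imgsz, unknown)  # -> 640 ['480']
```

With parse_args() instead of parse_known_args(), the leftover '480' triggers exactly the "unrecognized arguments: 480" error from the question.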
|
Image size during training in yolov5
|
I am trying to train a custom dataset in yolov5.
So I am trying to run it with an image size of 640x480 but it is not working.
python3 /YOLOv5/yolov5/train.py --img-size 640 480 --batch 8 --epochs 300 --data data.yaml --weights yolov5s.pt --cache
usage: train.py [-h] [--weights WEIGHTS] [--cfg CFG] [--data DATA] [--hyp HYP] [--epochs EPOCHS]
[--batch-size BATCH_SIZE] [--imgsz IMGSZ] [--rect] [--resume [RESUME]] [--nosave]
[--noval] [--noautoanchor] [--noplots] [--evolve [EVOLVE]] [--bucket BUCKET]
[--cache [CACHE]] [--image-weights] [--device DEVICE] [--multi-scale]
[--single-cls] [--optimizer {SGD,Adam,AdamW}] [--sync-bn] [--workers WORKERS]
[--project PROJECT] [--name NAME] [--exist-ok] [--quad] [--cos-lr]
[--label-smoothing LABEL_SMOOTHING] [--patience PATIENCE]
[--freeze FREEZE [FREEZE ...]] [--save-period SAVE_PERIOD]
[--local_rank LOCAL_RANK] [--entity ENTITY] [--upload_dataset [UPLOAD_DATASET]]
[--bbox_interval BBOX_INTERVAL] [--artifact_alias ARTIFACT_ALIAS]
train.py: error: unrecognized arguments: 480
def parse_opt(known=False):
parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')
(https://github.com/ultralytics/yolov5.git)
|
[
"--img-size\n\nonly takes one argument. Use:\npython3 /YOLOv5/yolov5/train.py --img-size 640 --batch 8 --epochs 300 --data data.yaml --weights yolov5s.pt --cache\n\nthe height of the image will be adjusted accordingly, respecting the aspect ratio and stride needs.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_3.x",
"pytorch",
"yolov5"
] |
stackoverflow_0074457702_python_python_3.x_pytorch_yolov5.txt
|
Q:
ValueError: The view **** didn't return an HttpResponse object. It returned None instead
I'm using Django forms to handle user input for some point on my Django app. but it keeps showing this error whenever the user tries to submit the form.
ValueError: The view *my view name goes here* didn't return an HttpResponse object. It returned None instead
Here's the code:
Forms.py
class sendBreachForm(forms.Form):
"""
Form for sending messages to users.
"""
text = forms.CharField(max_length=100)
image = forms.FileField()
cords = forms.CharField(widget=forms.TextInput(
attrs={"type":"hidden"}
))
views.py
@login_required
def web_app(request):
if request.user.is_staff or request.user.is_superuser:
return redirect('/ar/system/')
else:
if request.method == "POST":
form = sendBreachForm(request.POST)
print("AAAAAAAAa in a post request")
if form.is_valid():
print("AAAAAAAAa form is valid")
text = form.cleaned_data['text']
image = form.cleaned_data['image']
cords = form.cleaned_data['cords']
try:
new_breach = Breach.object.create(text=text,image=image)
add_form_cords_to_breach(request,new_breach,cords)
print("AAAAAAAA added breach")
return render(request,"web_app.html",context)
except :
print("AAAAAAAA error ")
return render(request,"web_app.html",context)
# raise Http404('wrong data')
else:
form = sendBreachForm()
context = {}
context['form']=form
context['all_elements'] = WaterElement.objects.all()
current_site = Site.objects.get_current()
the_domain = current_site.domain
context['domain'] = the_domain
all_layers = MapLayers.objects.all()
context['all_layers']=all_layers
return render(request,"web_app.html",context)
HTML
<form method ='post'>
{% csrf_token %}
{{form.text}}
<label for="text">وصف المعاينة</label>
{{form.image}}
<label for="image">صورة المعاينة</label>
{{form.cords}}
<input type="submit" value = "إرسال المعاينة">
</form>
A:
The error makes complete sense: a view must return a response on every code path. Currently every if has a matching else except if form.is_valid(), so handle that branch as well.
@login_required
def web_app(request):
if request.user.is_staff or request.user.is_superuser:
return redirect('/ar/system/')
else:
if request.method == "POST":
form = sendBreachForm(request.POST)
print("AAAAAAAAa in a post request")
if form.is_valid():
print("AAAAAAAAa form is valid")
text = form.cleaned_data['text']
image = form.cleaned_data['image']
cords = form.cleaned_data['cords']
try:
new_breach = Breach.object.create(text=text,image=image)
add_form_cords_to_breach(request,new_breach,cords)
print("AAAAAAAA added breach")
return render(request,"web_app.html",context)
except:
print("AAAAAAAA error ")
return render(request,"web_app.html",context)
# raise Http404('wrong data')
else:
print("form is not valid")
messages.error(request,"form is not valid, kindly enter correct details")
return redirect("some_error_page")
else:
form = sendBreachForm()
context = {}
context['form']=form
context['all_elements'] = WaterElement.objects.all()
current_site = Site.objects.get_current()
the_domain = current_site.domain
context['domain'] = the_domain
all_layers = MapLayers.objects.all()
context['all_layers']=all_layers
return render(request,"web_app.html",context)
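The underlying rule generalizes beyond Django: a view must return a response object on every code path, otherwise Python's implicit return None triggers exactly this ValueError. A framework-free sketch of the same control flow (hypothetical names, plain strings standing in for HttpResponse):

```python
def web_app(method, form_valid):
    # Every branch must return something; a branch that "falls through"
    # makes the function return None, which Django rejects.
    if method == "POST":
        if form_valid:
            return "rendered page"
        return "error page"  # without this line, POST + invalid form -> None
    return "form page"

print(web_app("POST", False))  # error page
```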
|
ValueError: The view **** didn't return an HttpResponse object. It returned None instead
|
I'm using Django forms to handle user input at some point in my Django app, but it keeps showing this error whenever the user tries to submit the form.
ValueError: The view *my view name goes here* didn't return an HttpResponse object. It returned None instead
Here's the code:
Forms.py
class sendBreachForm(forms.Form):
"""
Form for sending messages to users.
"""
text = forms.CharField(max_length=100)
image = forms.FileField()
cords = forms.CharField(widget=forms.TextInput(
attrs={"type":"hidden"}
))
views.py
@login_required
def web_app(request):
if request.user.is_staff or request.user.is_superuser:
return redirect('/ar/system/')
else:
if request.method == "POST":
form = sendBreachForm(request.POST)
print("AAAAAAAAa in a post request")
if form.is_valid():
print("AAAAAAAAa form is valid")
text = form.cleaned_data['text']
image = form.cleaned_data['image']
cords = form.cleaned_data['cords']
try:
new_breach = Breach.object.create(text=text,image=image)
add_form_cords_to_breach(request,new_breach,cords)
print("AAAAAAAA added breach")
return render(request,"web_app.html",context)
except :
print("AAAAAAAA error ")
return render(request,"web_app.html",context)
# raise Http404('wrong data')
else:
form = sendBreachForm()
context = {}
context['form']=form
context['all_elements'] = WaterElement.objects.all()
current_site = Site.objects.get_current()
the_domain = current_site.domain
context['domain'] = the_domain
all_layers = MapLayers.objects.all()
context['all_layers']=all_layers
return render(request,"web_app.html",context)
HTML
<form method ='post'>
{% csrf_token %}
{{form.text}}
<label for="text">وصف المعاينة</label>
{{form.image}}
<label for="image">صورة المعاينة</label>
{{form.cords}}
<input type="submit" value = "إرسال المعاينة">
</form>
|
[
"The error makes complete sense, the view should return some response in all the conditions, currently you have both if and else condition for everything, except if form.is_valid() so also maintain in that.\n@login_required\ndef web_app(request):\n if request.user.is_staff or request.user.is_superuser:\n return redirect('/ar/system/')\n else:\n if request.method == \"POST\":\n form = sendBreachForm(request.POST)\n print(\"AAAAAAAAa in a post request\")\n if form.is_valid():\n print(\"AAAAAAAAa form is valid\")\n text = form.cleaned_data['text']\n image = form.cleaned_data['image']\n cords = form.cleaned_data['cords']\n try:\n new_breach = Breach.object.create(text=text,image=image)\n add_form_cords_to_breach(request,new_breach,cords) \n print(\"AAAAAAAA added breach\")\n return render(request,\"web_app.html\",context) \n except:\n print(\"AAAAAAAA error \")\n return render(request,\"web_app.html\",context) \n # raise Http404('wrong data')\n else:\n print(\"form is not valid\")\n \n messages.error(request,\"form is not valid, kindly enter correct details\")\n return redirect(\"some_error_page\")\n \n else:\n form = sendBreachForm()\n context = {}\n context['form']=form\n context['all_elements'] = WaterElement.objects.all()\n current_site = Site.objects.get_current()\n the_domain = current_site.domain\n context['domain'] = the_domain\n all_layers = MapLayers.objects.all()\n context['all_layers']=all_layers\n return render(request,\"web_app.html\",context)\n\n\n"
] |
[
2
] |
[] |
[] |
[
"django",
"django_forms",
"django_templates",
"django_views",
"python"
] |
stackoverflow_0074580563_django_django_forms_django_templates_django_views_python.txt
|
Q:
Why are Pytorch transform functions not being differentiated with autograd?
I have been trying to write a set of transforms on input data. I also need the transforms to be differentiable to compute the gradients. However, gradients do not seem to be calculated for the resize, normalize transforms.
from torchvision import transforms
from torchvision.transforms import ToTensor
resize = transforms.Resize(size=224, interpolation=transforms.InterpolationMode.BICUBIC, max_size=None, antialias=None)
crop = transforms.CenterCrop(size=(224, 224))
normalize = transforms.Normalize(mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711))
img = torch.Tensor(images[30])
img.requires_grad = True
rgb = torch.dsplit(torch.Tensor(img),3)
transformed = torch.stack(rgb).reshape(3,100,100)
resized = resize.forward(transformed)
normalized = normalize.forward(resized)
image_features = clip_model.encode_image(normalized.unsqueeze(0).to(device))
text_features = clip_model.encode_text(text_inputs)
similarity = (100.0 * image_features @ text_features.T).softmax(dim=-1)
When running normalized.backward(), there are no gradients for resized and transformed.
I have tried to find the gradient for each individual transform, but it still does not calculate the gradients.
A:
Trying to reproduce your error, what I get when backpropagating the gradient from normalized is:
RuntimeError: grad can be implicitly created only for scalar outputs
What this error means is that the tensor you call backward on should be a scalar, not a vector or multi-dimensional tensor. Generally you want to reduce it to a scalar first, for example by averaging or summing. For example you could do the following:
> normalized.mean().backward()
|
Why are Pytorch transform functions not being differentiated with autograd?
|
I have been trying to write a set of transforms on input data. I also need the transforms to be differentiable to compute the gradients. However, gradients do not seem to be calculated for the resize, normalize transforms.
from torchvision import transforms
from torchvision.transforms import ToTensor
resize = transforms.Resize(size=224, interpolation=transforms.InterpolationMode.BICUBIC, max_size=None, antialias=None)
crop = transforms.CenterCrop(size=(224, 224))
normalize = transforms.Normalize(mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711))
img = torch.Tensor(images[30])
img.requires_grad = True
rgb = torch.dsplit(torch.Tensor(img),3)
transformed = torch.stack(rgb).reshape(3,100,100)
resized = resize.forward(transformed)
normalized = normalize.forward(resized)
image_features = clip_model.encode_image(normalized.unsqueeze(0).to(device))
text_features = clip_model.encode_text(text_inputs)
similarity = (100.0 * image_features @ text_features.T).softmax(dim=-1)
When running normalized.backward(), there are no gradients for resized and transformed.
I have tried to find the gradient for each individual transform, but it still does not calculate the gradients.
|
[
"Trying to reproduce your error, what I get when backpropagating the gradient from normalized is:\n\nRuntimeError: grad can be implicitly created only for scalar outputs\n\nWhat this error means is that the tensor you are calling backward onto should be a scalar and not a vector or multi-dimensional tensor. Generally you would want to reduce the dimensionality for example by averaging or summing. For example you could do the following:\n> normalized.mean().backward()\n\n"
] |
[
1
] |
[] |
[] |
[
"deep_learning",
"machine_learning",
"python",
"pytorch"
] |
stackoverflow_0074577705_deep_learning_machine_learning_python_pytorch.txt
|
Q:
Syntax for `apply` pandas function in ruby
I need to convert a python script into ruby. I use for that the gems Pandas and Numpy which make the work quite simple.
For example I have these kind of lines:
# python
# DF is a dataframe from Pandas
DF['VAL'].ewm(span = vDAY).mean()
DF['VOLAT'].rolling(vDAY).std()
so no question asked, I convert like this:
# ruby
df['VAL'].ewm(span: vDAY).mean
df['VOLAT'].rolling(vDAY).std
easy.
But I have a function apply from Pandas which takes a function as first argument and I really don't know how to convert it in ruby.
It's something like that :
# python
import numpy as np
DF['VAL'].rolling(vDAY).apply(lambda x: np.polyfit(range(len(x)), x, 1)[0])
# output=> NaN or Float
I tried to decompose the lambda like this:
# ruby
polyfit = ->(x) { t = Numpy.polyfit((0...x.size).to_a, x, 1); t[0] }
puts polyfit.call(<insert Array argument>)
#=> I have a satisfying output for my lambda
# but...
df['VAL'].rolling(vDAY).apply(&polyfit)
# output=> `apply': <class 'TypeError'>: must be real number, not NoneType (PyCall::PyError)
# or
df['VAL'].rolling(vDAY).apply{ |x| polyfit.call(x) }
# output=> `apply': <class 'TypeError'>: apply() missing 1 required positional argument: 'func' (PyCall::PyError)
# or
df['VAL'].rolling(vDAY).apply(polyfit)
#output=> `apply': <class 'TypeError'>: must be real number, not NoneType (PyCall::PyError)
# or
df['VAL'].rolling(vDAY).apply(:polyfit)
# output=> `apply': <class 'TypeError'>: 'str' object is not callable (PyCall::PyError)
It's obviously not working. The problem is the "x" argument in the Python inline syntax: I don't know how to express it "the Ruby way".
If someone can "translate" this apply function from python syntax to ruby, it would be really nice :)
I just want to point out that I'm a ruby/rails developer and I don't know python professionally speaking.
UPDATE:
Ok, it's a complete misunderstanding of python code for my part: apply needs a function argument as a callable object. So in ruby it's not a lambda but a Proc I need.
So the solution for those who encounter the same problem:
# ruby
polyfit = Proc.new { |x| t = Numpy.polyfit((0...x.size).to_a, x, 1); t[0] }
df['VAL'].rolling(vDAY).apply(polyfit)
A:
If someone can "translate" this apply function from python syntax to ruby, it would be really nice
The equivalent Ruby syntax is:
DF['VAL'].rolling(vDAY).apply(-> x { Numpy.polyfit((0...x.size).to_a, x, 1)[0] })
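A Ruby port can be checked against what the original Python lambda computes: np.polyfit(range(len(x)), x, 1)[0] is the least-squares slope of the window values against their positions 0..n-1. A dependency-free Python sketch of that slope (hypothetical helper name window_slope):

```python
def window_slope(y):
    # Least-squares slope of y against x = 0, 1, ..., n-1,
    # i.e. what np.polyfit(range(len(y)), y, 1)[0] returns.
    n = len(y)
    xs = range(n)
    sx = sum(xs)
    sy = sum(y)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * v for x, v in zip(xs, y))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

print(window_slope([2, 4, 6]))  # 2.0
```

Evaluating this on a few windows by hand is a cheap way to confirm the PyCall-wrapped version behaves the same.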
|
Syntax for `apply` pandas function in ruby
|
I need to convert a python script into ruby. I use for that the gems Pandas and Numpy which make the work quite simple.
For example I have these kind of lines:
# python
# DF is a dataframe from Pandas
DF['VAL'].ewm(span = vDAY).mean()
DF['VOLAT'].rolling(vDAY).std()
so no question asked, I convert like this:
# ruby
df['VAL'].ewm(span: vDAY).mean
df['VOLAT'].rolling(vDAY).std
easy.
But I have a function apply from Pandas which takes a function as first argument and I really don't know how to convert it in ruby.
It's something like that :
# python
import numpy as np
DF['VAL'].rolling(vDAY).apply(lambda x: np.polyfit(range(len(x)), x, 1)[0])
# output=> NaN or Float
I tried to decompose the lambda like this:
# ruby
polyfit = ->(x) { t = Numpy.polyfit((0...x.size).to_a, x, 1); t[0] }
puts polyfit.call(<insert Array argument>)
#=> I have a satisfying output for my lambda
# but...
df['VAL'].rolling(vDAY).apply(&polyfit)
# output=> `apply': <class 'TypeError'>: must be real number, not NoneType (PyCall::PyError)
# or
df['VAL'].rolling(vDAY).apply{ |x| polyfit.call(x) }
# output=> `apply': <class 'TypeError'>: apply() missing 1 required positional argument: 'func' (PyCall::PyError)
# or
df['VAL'].rolling(vDAY).apply(polyfit)
#output=> `apply': <class 'TypeError'>: must be real number, not NoneType (PyCall::PyError)
# or
df['VAL'].rolling(vDAY).apply(:polyfit)
# output=> `apply': <class 'TypeError'>: 'str' object is not callable (PyCall::PyError)
It's obviously not working. The problem is the "x" argument in the Python inline syntax: I don't know how to express it "the Ruby way".
If someone can "translate" this apply function from python syntax to ruby, it would be really nice :)
I just want to point out that I'm a ruby/rails developer and I don't know python professionally speaking.
UPDATE:
Ok, it's a complete misunderstanding of python code for my part: apply needs a function argument as a callable object. So in ruby it's not a lambda but a Proc I need.
So the solution for those who encounter the same problem:
# ruby
polyfit = Proc.new { |x| t = Numpy.polyfit((0...x.size).to_a, x, 1); t[0] }
df['VAL'].rolling(vDAY).apply(polyfit)
|
[
"\nIf someone can \"translate\" this apply function from python syntax to ruby, it would be really nice\n\nThe equivalent Ruby syntax is:\nDF['VAL'].rolling(vDAY).apply(-> x { np.polyfit(range(len(x)), x, 1)[0] })\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"pycall",
"python",
"ruby"
] |
stackoverflow_0074578142_dataframe_pandas_pycall_python_ruby.txt
|
Q:
Singleton-comparison suggestion by pylint
For the given code
def greater(n):
if n > 3:
res = True
else:
res = False
return res
a = greater(5)
print(hex(id(a)))
print(hex(id(True)))
b = True
print(hex(id(b)))
if a == True:
print('yes')
else:
print('no')
pylint suggests pylint_example.py:16:4: C0121: Comparison 'a == True' should be 'a is True' if checking for the singleton value True, or 'a' if testing for truthiness (singleton-comparison)
a is True will check both address and value
and I cannot assume immutable variables will have the same address
Thus, changing a == True to a is True may lead to incorrect results (a and True may have different addresses in memory). Why does pylint suggest that?
Though
print(hex(id(a)))
print(hex(id(True)))
b = True
print(hex(id(b)))
part gives consistent results. I am not sure if that would work in general.
A:
True and False are unique singletons: there is exactly one True object and one False object in the interpreter. If a has the value True, then a and True refer to the same object and therefore have the same memory address.
Source: PEP-0285 and In Python are the built in constants True and False unique?
A:
PEP 8 claims that the correct way is to use if variable, giving the following example:
if greeting:
and claims that
if greeting == True:
is wrong and
if greeting is True:
is worse.
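A small pure-Python demonstration of why the two comparisons differ, and why plain truthiness is usually what you want:

```python
a = 1          # truthy, and equal to True, but not the True object
b = True

assert a == True        # int 1 compares equal to True
assert a is not True    # ...but it is a different object
assert b is True        # an actual bool True is the singleton

# PEP 8 style: just test truthiness
if a:
    result = "truthy"
print(result)  # truthy
```

So `a == True` and `a is True` can disagree (e.g. for a == 1); testing `if a:` sidesteps the question entirely.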
|
Singleton-comparison suggestion by pylint
|
For the given code
def greater(n):
if n > 3:
res = True
else:
res = False
return res
a = greater(5)
print(hex(id(a)))
print(hex(id(True)))
b = True
print(hex(id(b)))
if a == True:
print('yes')
else:
print('no')
pylint suggests pylint_example.py:16:4: C0121: Comparison 'a == True' should be 'a is True' if checking for the singleton value True, or 'a' if testing for truthiness (singleton-comparison)
a is True will check both address and value
and I cannot assume immutable variables will have the same address
Thus, changing a == True to a is True may lead to incorrect results (a and True may have different addresses in memory). Why does pylint suggest that?
Though
print(hex(id(a)))
print(hex(id(True)))
b = True
print(hex(id(b)))
part gives consistent results. I am not sure if that would work in general.
|
[
"True and False are unique singletons, not immutable. If a has the value True, then a and True do have the same memory address.\nSource: PEP-0285 and In Python are the built in constants True and False unique?\n",
"PEP 8 claims that correct way is to use if variable giving following example\nif greeting:\n\nand claims that\nif greeting == True:\n\nis wrong and\nif greeting is True:\n\nis worse.\n"
] |
[
2,
0
] |
[] |
[] |
[
"pylint",
"python",
"python_3.x"
] |
stackoverflow_0074580659_pylint_python_python_3.x.txt
|
Q:
how to manipulate elements of a tensor if you have a set of indices? (torch.topk())
suppose i have a tensor and using torch.topk function i get the max k elements of a tensor and their indices. like the following code
>>> x = torch.arange(1., 6.)
>>> x
tensor([ 1., 2., 3., 4., 5.])
>>> torch.topk(x, 3)
torch.return_types.topk(values=tensor([5., 4., 3.]), indices=tensor([4, 3, 2]))
now suppose i want to set the above max k elements to 0 but keep them in the same position in original tensor. How can i do that?
if k was =3 then the new tensor should look like:
tensor([ 1., 2., 0., 0., 0.])
Basically, how can I use the indices of these max k elements (returned by torch.topk()) to zero out (set to 0) the values at those positions in the original tensor?
A torch method that does this directly would be preferred; failing that, the solution should be as efficient as possible.
Thank you for the help in advance :)
A:
The torch.topk function returns the values and indices of the top-k elements along the provided dimension. You can perform the reassignment operation using torch.scatter:
>>> _, i = x.topk(k=3)
>>> x.scatter(dim=0, index=i, value=0)
tensor([1., 2., 0., 0., 0.])
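Two side notes on the snippet above: scatter returns a new tensor (torch's in-place spellings are x.scatter_(...) or simply x[i] = 0), and the underlying zero-out-by-index pattern can be sketched dependency-free, with heapq standing in for topk (hypothetical helper name):

```python
import heapq

def zero_top_k(values, k):
    # Indices of the k largest values (the role torch.topk plays)
    idx = heapq.nlargest(k, range(len(values)), key=values.__getitem__)
    out = list(values)
    for i in idx:        # use the indices to zero the original positions
        out[i] = 0
    return out

print(zero_top_k([1., 2., 3., 4., 5.], 3))  # [1.0, 2.0, 0, 0, 0]
```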
|
how to manipulate elements of a tensor if you have a set of indices? (torch.topk())
|
suppose i have a tensor and using torch.topk function i get the max k elements of a tensor and their indices. like the following code
>>> x = torch.arange(1., 6.)
>>> x
tensor([ 1., 2., 3., 4., 5.])
>>> torch.topk(x, 3)
torch.return_types.topk(values=tensor([5., 4., 3.]), indices=tensor([4, 3, 2]))
now suppose i want to set the above max k elements to 0 but keep them in the same position in original tensor. How can i do that?
if k was =3 then the new tensor should look like:
tensor([ 1., 2., 0., 0., 0.])
Basically, how can I use the indices of these max k elements (returned by torch.topk()) to zero out (set to 0) the values at those positions in the original tensor?
A torch method that does this directly would be preferred; failing that, the solution should be as efficient as possible.
Thank you for the help in advance :)
|
[
"The torch.topk function returns the indices of the top-k elements on the provided dimension. You can perform the reassignment operation using torch.scatter:\n>>> _, i = x.topk(k=3)\n>>> x.scatter(dim=0, index=i, value=0)\ntensor([1., 2., 0., 0., 0.])\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"pytorch",
"tensor"
] |
stackoverflow_0074577249_python_pytorch_tensor.txt
|
Q:
Getting data between two div or a tags in BeautifulSoup
I am working on a scraping project in which there is some data between two different divs and two different a tags and we want to fetch everything in between them.
Sample problem 1:
<div id ="startID"></div>
<table>
<tr>
data
</tr>
</table>
<p>Paragraph data</p>
<div id="endID"></div>
Expected outcome 1: basically it fetches everything in between those two div elements.
<table>
<tr>
data
</tr>
</table>
<p>Paragraph data</p>
I know how to get the data inside a div tag but to get the data between two div tags is problematic.
A:
You can use .next_siblings to iteratively extract text from the elements following the startID tag until you reach the endID tag.
startID = soup.find(id="startID")
endID = soup.find(id="endID")
data = []
for sibling in startID.next_siblings:
if sibling == endID:
break
text = sibling.get_text(strip=True)
if text:
data.append(text)
output:
> data
['data', 'Paragraph data']
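If BeautifulSoup is not available, the same walk-between-two-markers idea can be sketched with the standard library's html.parser (a minimal sketch for this exact markup, not a replacement for bs4):

```python
from html.parser import HTMLParser

class BetweenMarkers(HTMLParser):
    """Collect text found between id="startID" and id="endID"."""
    def __init__(self):
        super().__init__()
        self.inside = False
        self.collected = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if attrs.get("id") == "startID":
            self.inside = True       # start collecting after this marker
        elif attrs.get("id") == "endID":
            self.inside = False      # stop at the closing marker

    def handle_data(self, data):
        text = data.strip()
        if self.inside and text:
            self.collected.append(text)

html = """<div id="startID"></div>
<table><tr>data</tr></table>
<p>Paragraph data</p>
<div id="endID"></div>"""

p = BetweenMarkers()
p.feed(html)
print(p.collected)  # ['data', 'Paragraph data']
```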
|
Getting data between two div or a tags in BeautifulSoup
|
I am working on a scraping project in which there is some data between two different divs and two different a tags and we want to fetch everything in between them.
Sample problem 1:
<div id ="startID"></div>
<table>
<tr>
data
</tr>
</table>
<p>Paragraph data</p>
<div id="endID"></div>
Expected outcome 1: basically it fetches everything in between those two div elements.
<table>
<tr>
data
</tr>
</table>
<p>Paragraph data</p>
I know how to get the data inside a div tag but to get the data between two div tags is problematic.
|
[
"You can use .next_sibling to iteratively extract text from the startID tag until you find the endID tag.\nstartID = soup.find(id=\"startID\")\nendID = soup.find(id=\"endID\")\ndata = []\nfor sibling in startID.next_siblings:\n if sibling == endID:\n break\n text = sibling.get_text(strip=True)\n if text:\n data.append(text)\n\noutput:\n> data\n\n['data', 'Paragraph data']\n\n"
] |
[
1
] |
[] |
[] |
[
"beautifulsoup",
"html",
"python",
"web_scraping"
] |
stackoverflow_0074580668_beautifulsoup_html_python_web_scraping.txt
|
Q:
Python print class member name from value
I have a class with list of class members (variabls), each assigned to its own value.
class PacketType:
HEARTBEAT = 0xF0
DEBUG = 0xFC
ECHO = 0xFF
@staticmethod
def get_name(value):
# Get variable name from value
# Print the variable in string format
return ???
If I call PacketType.get_name(0xF0), I'd like to get return as "HEARTBEAT".
Does python allow this, or is the only way to make list of if-elif for each possible value?
A:
The below works. (But I don't understand why you want such a thing.)
class PacketType:
HEARTBEAT = 0xF0
DEBUG = 0xFC
ECHO = 0xFF
@staticmethod
def get_name(value):
for k, v in PacketType.__dict__.items():
if v == value:
return k
return None
print(PacketType.get_name(0xFF))
output
ECHO
A:
Why not use dictionary to store packet types?
class PacketType:
packets = {
0xF0: 'HEARTBEAT',
0XFC: 'DEBUG',
0XFF: 'ECHO'
}
@classmethod
def get_name(cls, value):
return cls.packets[value]
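Since Python 3.4, the standard library's enum module handles value-to-name lookup natively, which may be worth considering instead of rolling your own:

```python
from enum import IntEnum

class PacketType(IntEnum):
    HEARTBEAT = 0xF0
    DEBUG = 0xFC
    ECHO = 0xFF

print(PacketType(0xF0).name)  # HEARTBEAT
print(PacketType.ECHO.value)  # 255
```

Note that PacketType(value) raises ValueError for an unknown value, unlike the dict lookup above (KeyError) or the __dict__ loop (which returns None).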
|
Python print class member name from value
|
I have a class with list of class members (variabls), each assigned to its own value.
class PacketType:
HEARTBEAT = 0xF0
DEBUG = 0xFC
ECHO = 0xFF
@staticmethod
def get_name(value):
# Get variable name from value
# Print the variable in string format
return ???
If I call PacketType.get_name(0xF0), I'd like to get return as "HEARTBEAT".
Does python allow this, or is the only way to make list of if-elif for each possible value?
|
[
"The below works. (But I dont understand why you want to have such thing)\nclass PacketType:\n HEARTBEAT = 0xF0\n DEBUG = 0xFC\n ECHO = 0xFF\n\n @staticmethod\n def get_name(value):\n for k, v in PacketType.__dict__.items():\n if v == value:\n return k\n return None\n\n\nprint(PacketType.get_name(0xFF))\n\noutput\nECHO\n\n",
"Why not use dictionary to store packet types?\nclass PacketType:\n packets = {\n 0xF0: 'HEARTHBEAT',\n 0XFC: 'DEBUG',\n 0XFF: 'ECHO'\n }\n\n @classmethod\n def get_name(cls, value):\n return cls.packets[value]\n\n"
] |
[
3,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074580745_python.txt
|
Q:
How to add input how many integers do you want?
I'm starting as a full-stack developer next month and I'm doing some practicing.
I started with Python and I want to write some code
with a while loop that asks the user how many integers they want to input,
and I want to calculate all the numbers.
I'm doing something wrong, not sure what.
Thanks in advance,
Oz
example:
number = int(input('Enter how many integer: '))
my_list = [number]
while len(my_list) < number:
user_input = int(input('Enter a integer: '))
my_list.append(user_input)
print(user_input+number)
print(my_list)
A:
number = int(input('Enter how many integer: '))
my_list = []
while len(my_list) < number:
user_input = int(input('Enter a integer: '))
my_list.append(user_input)
print(user_input, ' ' ,number)
print(my_list)
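The question also asks to "calculate all the numbers"; assuming that means summing them, separating the arithmetic from input() keeps it testable. A sketch (hypothetical helper name):

```python
def collect_and_sum(count, raw_values):
    # Take the first `count` entries, convert to int, return list and total.
    numbers = [int(v) for v in raw_values[:count]]
    return numbers, sum(numbers)

numbers, total = collect_and_sum(3, ["4", "7", "9"])
print(numbers, total)  # [4, 7, 9] 20
```

In the interactive version, each `raw_values` entry would come from `input('Enter a integer: ')`.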
|
How to add input how many integers do you want?
|
I'm starting as a full-stack developer next month and I'm doing some practicing.
I started with Python and I want to write some code
with a while loop that asks the user how many integers they want to input,
and I want to calculate all the numbers.
I'm doing something wrong, not sure what.
Thanks in advance,
Oz
example:
number = int(input('Enter how many integer: '))
my_list = [number]
while len(my_list) < number:
user_input = int(input('Enter a integer: '))
my_list.append(user_input)
print(user_input+number)
print(my_list)
|
[
"number = int(input('Enter how many integer: '))\nmy_list = []\nwhile len(my_list) < number:\n user_input = int(input('Enter a integer: '))\n my_list.append(user_input)\n print(user_input, ' ' ,number) \nprint(my_list)\n\n"
] |
[
0
] |
[] |
[] |
[
"input",
"integer",
"python",
"while_loop"
] |
stackoverflow_0074580482_input_integer_python_while_loop.txt
|
Q:
Is there an efficient way to calculate when a record was replaced by another?
I am going to use a soccer analogy to illustrate the problem. I have a table representing players in a soccer game.
player | position | start minute
------------------------------
Bob | keeper | 0
Pedro | Center Midfielder | 0
Joe | Striker | 0
Tim | Center Midfielder | 20
I want to add a column "end minute" for when they were substituted. In the table above, Pedro was substituted out of the "Center Midfielder" position at minute 20 by Tim. You know this because Tim started at the position after Pedro. If nobody replaces them then they play until the end and the "end minute" = 90. The difference between "start minute" and "end minute" is the "play duration" for each player.
I hope this is clear. I am unable to find a clean way to do this in pandas. In the above example there was only one substitution so you can "brute force" the problem. In principle, I need code that works for an unlimited number of substitutions and this is where I get stuck.
A:
One approach could be as follows:
Data
import pandas as pd
# adding some subs to get a more informative example
data = {'player': {0: 'Bob', 1: 'Pedro', 2: 'Joe', 3: 'Tim', 4: 'Keith',
5: 'Leo'},
'position': {0: 'keeper', 1: 'Center Midfielder', 2: 'Striker',
3: 'Center Midfielder', 4: 'Center Midfielder',
5: 'Striker'},
'start minute': {0: 0, 1: 0, 2: 0, 3: 20, 4: 85, 5: 70}}
df = pd.DataFrame(data)
player position start minute
0 Bob keeper 0 # 90 mins, no sub
1 Pedro Center Midfielder 0 # 20 mins, repl by Tim
2 Joe Striker 0 # 70 mins, repl by Leo
3 Tim Center Midfielder 20 # 65 mins, repl by Keith
4 Keith Center Midfielder 85 # 5 mins, no sub
5 Leo Striker 70 # 20 mins, no sub
Code
df['end minute'] = df.groupby('position').shift(-1)['start minute'].fillna(90)
df['play duration'] = df['end minute'].sub(df['start minute'])
print(df)
player position start minute end minute play duration
0 Bob keeper 0 90.0 90.0
1 Pedro Center Midfielder 0 20.0 20.0
2 Joe Striker 0 70.0 70.0
3 Tim Center Midfielder 20 85.0 65.0
4 Keith Center Midfielder 85 90.0 5.0
5 Leo Striker 70 90.0 20.0
Explanation
Use df.groupby on column position and shift by -1 periods.
Remaining NaN values will be for players who were still on the pitch at the end of the game, so let's chain Series.fillna with value 90.
Finally, add a column play duration, with Series.sub applied to the end and start column.
N.B. The above assumes that the players are listed in chronological order per position. If you're not sure about this, first use:
df.sort_values(by=['position','start minute'], inplace=True)
and then at the end use the following to get the original index back:
df.sort_index(inplace=True)
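A quick sanity check worth adding after the computation: within each position, the play durations should sum to the full 90 minutes. Using the same example data as above:

```python
import pandas as pd

data = {'player': ['Bob', 'Pedro', 'Joe', 'Tim', 'Keith', 'Leo'],
        'position': ['keeper', 'Center Midfielder', 'Striker',
                     'Center Midfielder', 'Center Midfielder', 'Striker'],
        'start minute': [0, 0, 0, 20, 85, 70]}
df = pd.DataFrame(data)

# same groupby/shift logic as the answer above
df['end minute'] = df.groupby('position')['start minute'].shift(-1).fillna(90)
df['play duration'] = df['end minute'] - df['start minute']

# each position should be covered for exactly 90 minutes in total
totals = df.groupby('position')['play duration'].sum()
print(totals.to_dict())
assert (totals == 90).all()
```

If this assertion fails, the rows were probably not in chronological order per position; apply the sort_values step first.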
|
Is there an efficient way to calculate when a record was replaced by another?
|
I am going to use a soccer analogy to illustrate the problem. I have a table representing players in a soccer game.
player | position | start minute
------------------------------
Bob | keeper | 0
Pedro | Center Midfielder | 0
Joe | Striker | 0
Tim | Center Midfielder | 20
I want to add a column "end minute" for when they were substituted. In the table above, Pedro was substituted out of the "Center Midfielder" position at minute 20 by Tim. You know this because Tim started at the position after Pedro. If nobody replaces them then they play until the end and the "end minute" = 90. The difference between "start minute" and "end minute" is the "play duration" for each player.
I hope this is clear. I am unable to find a clean way to do this in pandas. In the above example there was only one substitution so you can "brute force" the problem. In principle, I need code that works for an unlimited number of substitutions and this is where I get stuck.
|
[
"One approach could be as follows:\nData\nimport pandas as pd\n\n# adding some subs to get a more informative example\ndata = {'player': {0: 'Bob', 1: 'Pedro', 2: 'Joe', 3: 'Tim', 4: 'Keith',\n 5: 'Leo'}, \n 'position': {0: 'keeper', 1: 'Center Midfielder', 2: 'Striker', \n 3: 'Center Midfielder', 4: 'Center Midfielder',\n 5: 'Striker'}, \n 'start minute': {0: 0, 1: 0, 2: 0, 3: 20, 4: 85, 5: 70}}\ndf = pd.DataFrame(data)\n\n player position start minute\n0 Bob keeper 0 # 90 mins, no sub\n1 Pedro Center Midfielder 0 # 20 mins, repl by Tim\n2 Joe Striker 0 # 70 mins, repl by Leo\n3 Tim Center Midfielder 20 # 65 mins, repl by Keith\n4 Keith Center Midfielder 85 # 5 mins, no sub\n5 Leo Striker 70 # 20 mins, no sub\n\nCode\ndf['end minute'] = df.groupby('position').shift(-1)['start minute'].fillna(90)\ndf['play duration'] = df['end minute'].sub(df['start minute'])\n\nprint(df)\n\n player position start minute end minute play duration\n0 Bob keeper 0 90.0 90.0\n1 Pedro Center Midfielder 0 20.0 20.0\n2 Joe Striker 0 70.0 70.0\n3 Tim Center Midfielder 20 85.0 65.0\n4 Keith Center Midfielder 85 90.0 5.0\n5 Leo Striker 70 90.0 20.0\n\nExplanation\n\nUse df.groupby on column position and shift by -1 periods.\nRemaining NaN values will be for players who were still on the pitch at the end of the game, so let's chain Series.fillna with value 90.\nFinally, add a column play duration, with Series.sub applied to the end and start column.\n\nN.B. The above assumes that the players are listed in chronological order per position. If you're not sure about this, first use:\ndf.sort_values(by=['position','start minute'], inplace=True)\n\nand then at the end use the following to get the original index back:\ndf.sort_index(inplace=True)\n\n"
] |
[
1
] |
[] |
[] |
[
"pandas",
"python",
"sql_order_by"
] |
stackoverflow_0074580331_pandas_python_sql_order_by.txt
|
Q:
The transaction declared chain ID 5777, but the connected node is on 1337
I am trying to deploy my SimpleStorage.sol contract to a ganache local chain by making a transaction using python. It seems to have trouble connecting to the chain.
from solcx import compile_standard
from web3 import Web3
import json
import os
from dotenv import load_dotenv
load_dotenv()
with open("./SimpleStorage.sol", "r") as file:
simple_storage_file = file.read()
compiled_sol = compile_standard(
{
"language": "Solidity",
"sources": {"SimpleStorage.sol": {"content": simple_storage_file}},
"settings": {
"outputSelection": {
"*": {"*": ["abi", "metadata", "evm.bytecode", "evm.sourceMap"]}
}
},
},
solc_version="0.6.0",
)
with open("compiled_code.json", "w") as file:
json.dump(compiled_sol, file)
# get bytecode
bytecode = compiled_sol["contracts"]["SimpleStorage.sol"]["SimpleStorage"]["evm"][
"bytecode"
]["object"]
# get ABI
abi = compiled_sol["contracts"]["SimpleStorage.sol"]["SimpleStorage"]["abi"]
# to connect to ganache blockchain
w3 = Web3(Web3.HTTPProvider("HTTP://127.0.0.1:7545"))
chain_id = 5777
my_address = "0xca1EA31e644F13E3E36631382686fD471c62267A"
private_key = os.getenv("PRIVATE_KEY")
# create the contract in python
SimpleStorage = w3.eth.contract(abi=abi, bytecode=bytecode)
# get the latest transaction
nonce = w3.eth.getTransactionCount(my_address)
# 1. Build a transaction
# 2. Sign a transaction
# 3. Send a transaction
transaction = SimpleStorage.constructor().buildTransaction(
{"chainId": chain_id, "from": my_address, "nonce": nonce}
)
print(transaction)
It seems to be connected to the ganache chain because it prints the nonce, but when I build and try to print the transaction it fails. Here is the entire traceback I am receiving:
Traceback (most recent call last):
File "C:\Users\evens\demos\web3_py_simple_storage\deploy.py", line
52, in <module>
transaction = SimpleStorage.constructor().buildTransaction(
File "C:\Python310\lib\site-packages\eth_utils\decorators.py", line
18, in _wrapper
return self.method(obj, *args, **kwargs)
File "C:\Users\evens\AppData\Roaming\Python\Python310\site-
packages\web3\contract.py", line 684, in buildTransaction
return fill_transaction_defaults(self.web3, built_transaction)
File "cytoolz/functoolz.pyx", line 250, in
cytoolz.functoolz.curry.__call__
return self.func(*args, **kwargs)
File "C:\Users\evens\AppData\Roaming\Python\Python310\site-
packages\web3\_utils\transactions.py", line 114, in
fill_transaction_defaults
default_val = default_getter(web3, transaction)
File "C:\Users\evens\AppData\Roaming\Python\Python310\site-
packages\web3\_utils\transactions.py", line 60, in <lambda>
'gas': lambda web3, tx: web3.eth.estimate_gas(tx),
File "C:\Users\evens\AppData\Roaming\Python\Python310\site-
packages\web3\eth.py", line 820, in estimate_gas
return self._estimate_gas(transaction, block_identifier)
File "C:\Users\evens\AppData\Roaming\Python\Python310\site-
packages\web3\module.py", line 57, in caller
result = w3.manager.request_blocking(method_str,
File "C:\Users\evens\AppData\Roaming\Python\Python310\site-
packages\web3\manager.py", line 197, in request_blocking
response = self._make_request(method, params)
File "C:\Users\evens\AppData\Roaming\Python\Python310\site-
packages\web3\manager.py", line 150, in _make_request
return request_func(method, params)
File "cytoolz/functoolz.pyx", line 250, in
cytoolz.functoolz.curry.__call__
return self.func(*args, **kwargs)
File "C:\Users\evens\AppData\Roaming\Python\Python310\site-
packages\web3\middleware\formatting.py", line 76, in
apply_formatters
response = make_request(method, params)
File "C:\Users\evens\AppData\Roaming\Python\Python310\site-
packages\web3\middleware\gas_price_strategy.py", line 90, in
middleware
return make_request(method, params)
File "cytoolz/functoolz.pyx", line 250, in
cytoolz.functoolz.curry.__call__
return self.func(*args, **kwargs)
File "C:\Users\evens\AppData\Roaming\Python\Python310\site-
packages\web3\middleware\formatting.py", line 74, in
apply_formatters
response = make_request(method, formatted_params)
File "C:\Users\evens\AppData\Roaming\Python\Python310\site-
packages\web3\middleware\attrdict.py", line 33, in middleware
response = make_request(method, params)
File "cytoolz/functoolz.pyx", line 250, in
cytoolz.functoolz.curry.__call__
return self.func(*args, **kwargs)
File "C:\Users\evens\AppData\Roaming\Python\Python310\site-
packages\web3\middleware\formatting.py", line 74, in
apply_formatters
response = make_request(method, formatted_params)
File "cytoolz/functoolz.pyx", line 250, in
cytoolz.functoolz.curry.__call__
return self.func(*args, **kwargs)
File "C:\Users\evens\AppData\Roaming\Python\Python310\site-
packages\web3\middleware\formatting.py", line 73, in
apply_formatters
formatted_params = formatter(params)
File "cytoolz/functoolz.pyx", line 503, in
cytoolz.functoolz.Compose.__call__
ret = PyObject_Call(self.first, args, kwargs)
File "cytoolz/functoolz.pyx", line 250, in
cytoolz.functoolz.curry.__call__
return self.func(*args, **kwargs)
File "C:\Python310\lib\site-packages\eth_utils\decorators.py", line
91, in wrapper
return ReturnType(result) # type: ignore
File "C:\Python310\lib\site-packages\eth_utils\applicators.py", line
22, in apply_formatter_at_index
yield formatter(item)
File "cytoolz/functoolz.pyx", line 250, in
cytoolz.functoolz.curry.__call__
File "cytoolz/functoolz.pyx", line 250, in
cytoolz.functoolz.curry.__call__
return self.func(*args, **kwargs)
File "C:\Python310\lib\site-packages\eth_utils\applicators.py", line
72, in apply_formatter_if
return formatter(value)
File "cytoolz/functoolz.pyx", line 250, in
cytoolz.functoolz.curry.__call__
return self.func(*args, **kwargs)
File "C:\Users\evens\AppData\Roaming\Python\Python310\site-
packages\web3\middleware\validation.py", line 57, in
validate_chain_id
raise ValidationError(
web3.exceptions.ValidationError: The transaction declared chain ID
5777, but the connected node is on 1337
A:
Had this issue myself; apparently it's some sort of Ganache CLI error, but the simplest fix I could find was to change the network id in Ganache through Settings > Server to 1337. That restarts the session, so you'd then need to update the address and private key variables.
If it's the same tutorial I'm doing, you're likely to come unstuck after this... the code for transaction should be:
transaction = SimpleStorage.constructor().buildTransaction({
    "gasPrice": w3.eth.gas_price,
    "chainId": chain_id,
    "from": my_address,
    "nonce": nonce,
})
print(transaction)
Otherwise you get a ValueError if you don't set the gasPrice.
A:
This line of code is wrong:
chain_id = 5777

Ganache's chain id is not 5777; that is the network id. The network id is used by nodes to transfer data between nodes that are on the same network. It is not included in blocks and it is not used for signing transactions or mining blocks.
chain_id = 1337

The chain id is not included in blocks either, but it is used during the transaction signing and verification process.
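The signing point can be made concrete. This is an illustrative sketch of the EIP-155 rule, not web3.py's actual internals: the declared chain id is folded into the signature's v value, which is what a validating node checks against its own chain id.

```python
# Illustrative EIP-155 sketch (not web3.py internals): the chain id is
# folded into the signature's recovery value v, so a transaction signed
# for chain 5777 fails validation on a node whose chain id is 1337.
def eip155_v(chain_id: int, recovery_id: int) -> int:
    # EIP-155 defines v = chain_id * 2 + 35 + recovery_id (recovery_id is 0 or 1).
    return chain_id * 2 + 35 + recovery_id

def chain_id_from_v(v: int) -> int:
    # Invert the formula to recover the declared chain id from v.
    return (v - 35) // 2

print(eip155_v(1337, 0))                    # 2709
print(chain_id_from_v(2709))                # 1337
print(chain_id_from_v(eip155_v(5777, 1)))   # 5777 -- the mismatch the node reports
```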
A:
It works for me; I get the chain_id value from w3.eth.chain_id:
transaction = SimpleStorage.constructor().buildTransaction(
    {
        "gasPrice": w3.eth.gas_price,
        "chainId": w3.eth.chain_id,
        "from": my_address,
        "nonce": nonce,
    }
)
w3.eth.chain_id delegates to the eth_chainId RPC method. It returns an integer value for the currently configured “Chain Id” introduced in EIP-155, or None if no chain id is available.
A:
Hey, it happened to me too; you need to build the constructor like this:
SimpleStorage.constructor().buildTransaction({
    "gasPrice": w3.eth.gas_price,
    "chainId": chain_id,
    "from": my_address,
    "nonce": nonce,
})

You need to add the gas price part.
Worked for me
A:
I encountered this issue today and, after debugging, I am certain that the issue comes from the dotenv config variables. All the variables are seen by Python as strings, and for some reason the web3 library mishandles the value 5777 if the type of your chain id is not an integer.
The fix is to cast your chain_id to int before deploying your smart contract.
Hope this saves y'all some pondering time, peace out!
Snippet example below:
transaction = SimpleStorage.constructor().buildTransaction(
    {
        "gasPrice": w3.eth.gas_price,
        "chainId": int(chain_id),
        "from": my_address,
        "nonce": nonce,
    }
)
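The string-vs-int point can be verified without dotenv at all, since environment variables always come back as strings; the variable name below is a hypothetical stand-in for a .env entry:

```python
import os

# Environment variables -- including everything loaded from a .env file --
# are always strings, even when they look numeric.
os.environ["CHAIN_ID"] = "1337"   # hypothetical variable, stands in for dotenv
chain_id = os.getenv("CHAIN_ID")

print(type(chain_id).__name__)    # str
print(chain_id == 1337)           # False: "1337" is not the integer 1337
print(int(chain_id) == 1337)      # True after the cast
```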
A:
Perhaps you are following the FreeCodeCamp tutorial. I got the same problem.
In my case, it worked after adding the gas price, "gasPrice": w3.eth.gas_price:
transaction = SimpleStorage.constructor().buildTransaction(
    {"gasPrice": w3.eth.gas_price, "chainId": chain_id, "from": my_address, "nonce": nonce}
)
print(transaction)
A:
You can check your chainId using the following code
const chainId = await wallet.getChainId();
console.log("chain Id",chainId);
For me it was 1337
Q:
Pandas and bs4 html scraping
I am extracting data from an HTML file; it is in a table format, so I wrote this line of code to convert all the tables to data frames with pandas.
dfs = pd.read_html("synced_contacts.html")
Now, printing the 2nd table of the data frame list:
dfs[1]
The output is the following:
How can I prevent the information from being duplicated across two columns as shown in the image, and also split "First NameDaniela" into "First Name" as the first column and "Daniela" as the value?
Expected Output:
Table HTML structure:
<title>Synced contacts</title></head><body class="_5vb_ _2yq _a7o5"><div class="clearfix _ikh"><div class="_4bl9"><div class="_li"><div><table style="width:100%;background:white;position:fixed;z-index:99;"><tr style=""><td height="8" style="line-height:8px;"> </td></tr><tr style="background:white"><td style="text-align:left;height:28px;width:35px;"></td><td style="text-align:left;height:28px;"><img src="files/Instagram-Logo.png" height="28" alt="Instagram" /></td></tr><tr style=""><td height="5" style="line-height:5px;"> </td></tr></table><div style="width:100%;height:44px;"></div></div><div class="_a705"><div class="_3-8y _3-95 _a70a"><div class="_a70d"><div class="_a70e">Synced contacts</div><div class="_a70f">Contacts you've synced</div></div></div><div class="_a706" role="main"><div class="pam _3-95 _2ph- _a6-g uiBoxWhite noborder"><div class="_a6-p"><table style="table-layout: fixed;"><tr><td colspan="2" class="_2pin _a6_q">First Name<div><div>Daniela</div></div></td></tr><tr><td colspan="2" class="_2pin _a6_q">Last Name<div><div>Guevara</div></div></td></tr><tr><td colspan="2" class="_2pin _a6_q">Contact Information<div><div>3017004914</div></div></td></tr></table></div><div class="_3-94 _a6-o"></div></div><div class="pam _3-95 _2ph- _a6-g uiBoxWhite noborder"><div class="_a6-p"><table style="table-layout: fixed;"><tr><td colspan="2" class="_2pin _a6_q">First Name<div><div>Marianna</div></div></td></tr><tr><td colspan="2" class="_2pin _a6_q">Contact Information<div><div>3125761972</div></div></td></tr></table></div><div class="_3-94 _a6-o"></div></div><div class="pam _3-95 _2ph- _a6-g uiBoxWhite noborder"><div class="_a6-p"><table style="table-layout: fixed;"><tr><td colspan="2" class="_2pin _a6_q">First Name<div><div>Ana Maria</div></div></td></tr><tr><td colspan="2" class="_2pin _a6_q">Last Name<div><div>Garzon</div></div></td></tr><tr><td colspan="2" class="_2pin _a6_q">Contact Information<div><div>3214948507</div></div></td></tr></table></div>
A:
It is caused by the structure: everything is placed in a single <td> and gets concatenated, and the colspan is creating the second column.
pd.read_html() is good for a first, easy pass; it will not necessarily handle every messy table in real life.
So instead of using pd.read_html() you could use BeautifulSoup directly to fit the scraping behavior to your needs and create a dataframe from the result. .stripped_strings is used here to split the text of each element in the <tr> into a list.
pd.DataFrame(
    [
        dict([list(row.stripped_strings) for row in t.select('tr')])
        for t in soup.select('table:has(._2pin)')
    ]
)
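The dict(...) call works because each <tr> yields exactly two stripped strings, a label and a value, so the rows form key/value pairs. A minimal illustration of that step without bs4, using sample values from the table above:

```python
# Each table row yields a [label, value] pair -- this is the shape that
# list(row.stripped_strings) produces for one <tr> in the HTML above.
rows = [
    ["First Name", "Daniela"],
    ["Last Name", "Guevara"],
    ["Contact Information", "3017004914"],
]

# dict() over a sequence of two-item lists builds one record per table;
# feeding a list of such records to pd.DataFrame gives one row per contact.
record = dict(rows)
print(record["First Name"])           # Daniela
print(record["Contact Information"])  # 3017004914
```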
Example
import pandas as pd
from bs4 import BeautifulSoup
html='''
<title>Synced contacts</title></head><body class="_5vb_ _2yq _a7o5"><div class="clearfix _ikh"><div class="_4bl9"><div class="_li"><div><table style="width:100%;background:white;position:fixed;z-index:99;"><tr style=""><td height="8" style="line-height:8px;"> </td></tr><tr style="background:white"><td style="text-align:left;height:28px;width:35px;"></td><td style="text-align:left;height:28px;"><img src="files/Instagram-Logo.png" height="28" alt="Instagram" /></td></tr><tr style=""><td height="5" style="line-height:5px;"> </td></tr></table><div style="width:100%;height:44px;"></div></div><div class="_a705"><div class="_3-8y _3-95 _a70a"><div class="_a70d"><div class="_a70e">Synced contacts</div><div class="_a70f">Contacts you've synced</div></div></div><div class="_a706" role="main"><div class="pam _3-95 _2ph- _a6-g uiBoxWhite noborder"><div class="_a6-p"><table style="table-layout: fixed;"><tr><td colspan="2" class="_2pin _a6_q">First Name<div><div>Daniela</div></div></td></tr><tr><td colspan="2" class="_2pin _a6_q">Last Name<div><div>Guevara</div></div></td></tr><tr><td colspan="2" class="_2pin _a6_q">Contact Information<div><div>3017004914</div></div></td></tr></table></div><div class="_3-94 _a6-o"></div></div><div class="pam _3-95 _2ph- _a6-g uiBoxWhite noborder"><div class="_a6-p"><table style="table-layout: fixed;"><tr><td colspan="2" class="_2pin _a6_q">First Name<div><div>Marianna</div></div></td></tr><tr><td colspan="2" class="_2pin _a6_q">Contact Information<div><div>3125761972</div></div></td></tr></table></div><div class="_3-94 _a6-o"></div></div><div class="pam _3-95 _2ph- _a6-g uiBoxWhite noborder"><div class="_a6-p"><table style="table-layout: fixed;"><tr><td colspan="2" class="_2pin _a6_q">First Name<div><div>Ana Maria</div></div></td></tr><tr><td colspan="2" class="_2pin _a6_q">Last Name<div><div>Garzon</div></div></td></tr><tr><td colspan="2" class="_2pin _a6_q">Contact Information<div><div>3214948507</div></div></td></tr></table></div>
'''
soup = BeautifulSoup(html, 'html.parser')
pd.DataFrame(
    [
        dict([list(row.stripped_strings) for row in t.select('tr')])
        for t in soup.select('table:has(._2pin)')
    ]
)
Output
  First Name Last Name Contact Information
0    Daniela   Guevara          3017004914
1   Marianna       nan          3125761972
2  Ana Maria    Garzon          3214948507
Q:
cannot be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at
It was working fine and then I got an error. After solving it, I always get this error, whatever the project is.
output:
& : File C:\Users\pc\Documents\python\venv\Scripts\Activate.ps1 cannot be loaded because running scripts is
disabled on this system. For more information, see about_Execution_Policies at
https:/go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:3
+ & c:/Users/pc/Documents/python/venv/Scripts/Activate.ps1
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : SecurityError: (:) [], PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess
A:
This is because the user you're running the script as has an undefined ExecutionPolicy. You can fix this by running the following in PowerShell:
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Unrestricted
A:
If you are getting an error like this, you can resolve it using the following steps.
Get the status of the current ExecutionPolicy with the command below:
Get-ExecutionPolicy

By default it is Restricted. To allow the execution of PowerShell scripts, we need to set this ExecutionPolicy to either Unrestricted or Bypass.
We can set the policy for the current user to Bypass using either of the PowerShell commands below:
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Unrestricted -Force
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Bypass -Force
Unrestricted policy loads all configuration files and runs all scripts. If you run an unsigned script that was downloaded from the Internet, you are prompted for permission before it runs.
Whereas in Bypass policy, nothing is blocked and there are no warnings or prompts during script execution. Bypass ExecutionPolicy is more relaxed than Unrestricted.
A:
Might also wanna consider setting it to:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
You'll get a message about Execution Policy change (obviously) to which I said "A" for yes to all. Select what works best for you.
This should allow you to run your own scripts but any originating from anywhere else will require approval.
*above post edited for clarity
A:
Just open Windows PowerShell as administrator and execute the command Set-ExecutionPolicy Unrestricted -Force. The issue will be resolved and you can activate the venv in VS Code or CMD.
A:
Step 1: Press the Windows button on your keyboard.
Step 2: Type 'PowerShell'.
Step 3: Right-click Windows PowerShell.
Step 4: Click Run as Administrator.
Step 5: Run the following command and confirm with 'Y'.
Try this:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine
A:
Just type this in PowerShell:
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Unrestricted
and it will be enabled
A:
This command worked for me:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

Run this in your PowerShell; I think it will work.
A:
Step 1: Press the Windows button on your keyboard.
Step 2: Type 'PowerShell'.
Step 3: Right-click Windows PowerShell.
Step 4: Click Run as Administrator.
Step 5: Run the following command and confirm with 'Y'.
Try this:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine
A:
first run:
Get-ExecutionPolicy
then run:
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Unrestricted -Force
Q:
Getting real time output from iperf3 using python's subprocess
This is a follow-on to: Getting realtime output using subprocess
I'm trying to use subprocess to capture output from iperf3 in real time (using python 3.6 on windows). The goal is to leave the iperf3 session running continuously and grab the data to update a real time plot.
I created an implementation based on the referenced question (see code at end of post), but the code still waits on the first "readline" call for the iperf3 session to complete.
Output and desired behavior
My code returns the output:
Iperf test
Popen returns after: 0.012966156005859375 seconds
Readline 0 returned after: 3.2275266647338867 seconds, line was: Connecting to host 10.99.99.21, port 5201
Readline 1 returned after: 3.2275266647338867 seconds, line was: [ 4] local 10.99.99.7 port 55563 connected to 10.99.99.21 port 5201
Readline 2 returned after: 3.2275266647338867 seconds, line was: [ ID] Interval Transfer Bandwidth
Readline 3 returned after: 3.2275266647338867 seconds, line was: [ 4] 0.00-0.50 sec 27.4 MBytes 458 Mbits/sec
Readline 4 returned after: 3.2275266647338867 seconds, line was: [ 4] 0.50-1.00 sec 29.0 MBytes 486 Mbits/sec
Exited
The outputs show that the first readline call doesn't return until after 3 seconds, when the iperf session completes. The desired behavior is that the readline calls 0, 1, and 2 return almost immediately, and readline call #3 returns after approx. 0.5 seconds, as soon as iperf3 has completed the first 0.5 second reporting interval.
Code
import subprocess
import time
if __name__ == "__main__":
print('Iperf test')
tref = time.time()
reads_to_capture = 5
times = [0] * reads_to_capture
lines = [''] * reads_to_capture
interval = 0.5
ip = '10.99.99.21' # Iperf server IP address
process = subprocess.Popen(f'iperf3 -c {ip} -f m -i {interval} -t 3', encoding = 'utf-8',
stdout=subprocess.PIPE)
print(f'Popen returns after: {time.time() - tref} seconds')
cnt = 0
while True:
output = process.stdout.readline()
if cnt < reads_to_capture: # To avoid flooding the terminal, only print the first 5
times[cnt] = time.time() - tref
lines[cnt] = output
cnt = cnt + 1
if output == '':
rc = process.poll()
if rc is not None:
break
rc = process.poll()
for ii in range(reads_to_capture):
print(f'Readline {ii} returned after: {times[ii]} seconds, line was: {lines[ii].strip()}')
print('Exited')
A:
Sorry for my late answer. There is an API for iperf3; luckily, it comes with the standard iperf3 build/installation.
This API allows Python to consume iperf3's usual output directly.
Here is the official site of the Python wrapper for iperf3; it comes with simple examples for your use. Hope this answers your question.
https://iperf3-python.readthedocs.io/en/latest/index.html
A:
In order to get real-time output from iperf3 into subprocess.Popen, you need the --forceflush flag in the iperf3 command. The --forceflush flag was introduced in iperf 3.1.5; unfortunately, the officially compiled iperf3.exe only goes up to iperf 3.1.3.
Two solutions for you:
get iperf >= 3.1.5 from a non-official source like: https://files.budman.pw/
use Linux's iperf3
Attached is my code:
import subprocess
my_iperf_process = subprocess.Popen(["iperf3","-c","192.168.0.1","--forceflush"],stdout=subprocess.PIPE)
for line in my_iperf_process.stdout:
print(line)
The help message of --forceflush:
--forceflush force flushing output at every interval
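To see that the Python side of this pattern really does deliver lines as soon as the child flushes them, here is a self-contained sketch that replaces iperf3 with a small stand-in child process (the child script and its output lines are my own, for illustration):

```python
import subprocess
import sys
import time

# Stand-in for iperf3: a child that prints a line every 0.2 s and
# flushes immediately (the effect --forceflush has on iperf3 itself).
child = (
    "import time\n"
    "for i in range(3):\n"
    "    print(f'interval {i}', flush=True)\n"
    "    time.sleep(0.2)\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", child],
    stdout=subprocess.PIPE,
    encoding="utf-8",
)

tref = time.time()
lines = []
for line in proc.stdout:          # yields each line as soon as it arrives
    lines.append((time.time() - tref, line.strip()))
proc.wait()

for t, text in lines:
    print(f"{t:.2f}s: {text}")
```

Each timestamp is roughly 0.2 s after the previous one, confirming the reader is not waiting for the child to exit.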
|
Getting real time output from iperf3 using python's subprocess
|
This is a follow-on to: Getting realtime output using subprocess
I'm trying to use subprocess to capture output from iperf3 in real time (using python 3.6 on windows). The goal is to leave the iperf3 session running continuously and grab the data to update a real time plot.
I created an implementation based on the referenced question (see code at end of post), but the code still waits on the first "readline" call for the iperf3 session to complete.
Output and desired behavior
My code returns the output:
Iperf test
Popen returns after: 0.012966156005859375 seconds
Readline 0 returned after: 3.2275266647338867 seconds, line was: Connecting to host 10.99.99.21, port 5201
Readline 1 returned after: 3.2275266647338867 seconds, line was: [ 4] local 10.99.99.7 port 55563 connected to 10.99.99.21 port 5201
Readline 2 returned after: 3.2275266647338867 seconds, line was: [ ID] Interval Transfer Bandwidth
Readline 3 returned after: 3.2275266647338867 seconds, line was: [ 4] 0.00-0.50 sec 27.4 MBytes 458 Mbits/sec
Readline 4 returned after: 3.2275266647338867 seconds, line was: [ 4] 0.50-1.00 sec 29.0 MBytes 486 Mbits/sec
Exited
The outputs show that the first readline call doesn't return until after 3 seconds, when the iperf session completes. The desired behavior is that the readline calls 0, 1, and 2 return almost immediately, and readline call #3 returns after approx. 0.5 seconds, as soon as iperf3 has completed the first 0.5 second reporting interval.
Code
import subprocess
import time
if __name__ == "__main__":
print('Iperf test')
tref = time.time()
reads_to_capture = 5
times = [0] * reads_to_capture
lines = [''] * reads_to_capture
interval = 0.5
ip = '10.99.99.21' # Iperf server IP address
process = subprocess.Popen(f'iperf3 -c {ip} -f m -i {interval} -t 3', encoding = 'utf-8',
stdout=subprocess.PIPE)
print(f'Popen returns after: {time.time() - tref} seconds')
cnt = 0
while True:
output = process.stdout.readline()
if cnt < reads_to_capture: # To avoid flooding the terminal, only print the first 5
times[cnt] = time.time() - tref
lines[cnt] = output
cnt = cnt + 1
if output == '':
rc = process.poll()
if rc is not None:
break
rc = process.poll()
for ii in range(reads_to_capture):
print(f'Readline {ii} returned after: {times[ii]} seconds, line was: {lines[ii].strip()}')
print('Exited')
|
[
"Sorry for my late answer. There is an API for Iperf3, luckily this comes with the standard iperf3 build/installation.\nThis API allows python to take the common output of iperf3.\nI let you the official website of the python wrapper for iperf3. It comes with simple examples for your use. Hope I could have gave you an answer.\nhttps://iperf3-python.readthedocs.io/en/latest/index.html\n",
"In order to get the real time output from iperf3 to the subprocess.Popen you need the --forceflush flag in the iperf3 command. The --forceflush flag is introduced in iperf 3.1.5, unfortunately the official compiled iperf3.exe file have only until iperf 3.1.3.\nTwo solution for you,\n\nget the iperf >= 3.1.5 from non official route like: https://files.budman.pw/\nuse linux's iperf3\n\nAttached with my code:\nimport subprocess\n\nmy_iperf_process = subprocess.Popen([\"iperf3\",\"-c\",\"192.168.0.1\",\"--forceflush\"],stdout=subprocess.PIPE)\n\nfor line in my_iperf_process.stdout:\n print(line)\n\n\nThe help message of --forceflush:\n--forceflush force flushing output at every interval\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"iperf",
"python",
"subprocess"
] |
stackoverflow_0061737867_iperf_python_subprocess.txt
|
Q:
Unsatisfiable error installing QIIME in conda environment glibc==2.31=0
I've been trying to install QIIME2 on Linux with the command
conda install -c qiime2 qiime2
and get this error message:
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: \
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort. failed
UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:
Specifications:
- qiime2 -> python[version='>=3.6,<3.7.0a0|>=3.8,<3.9.0a0']
Your python: python=3.9
If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.
The following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.31=0
- feature:|@/linux-64::__glibc==2.31=0
Your installed version is: 2.31`
I even tried installing it another way using:
conda env create -n qiime2 --file qiime2.yml
and get another error:
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
- libgfortran=4.0.0
- appnope=0.1.0
I'm very new to this so any help on how I could maybe solve this would be greatly appreciated.
Thanks!
A:
./conda install -c anaconda appnope
./conda install -c anaconda libgfortran
|
Unsatisfiable error installing QIIME in conda environment glibc==2.31=0
|
I've been trying to install QIIME2 on Linux with the command
conda install -c qiime2 qiime2
and get this error message:
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: \
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort. failed
UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:
Specifications:
- qiime2 -> python[version='>=3.6,<3.7.0a0|>=3.8,<3.9.0a0']
Your python: python=3.9
If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.
The following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.31=0
- feature:|@/linux-64::__glibc==2.31=0
Your installed version is: 2.31`
I even tried installing it another way using:
conda env create -n qiime2 --file qiime2.yml
and get another error:
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
- libgfortran=4.0.0
- appnope=0.1.0
I'm very new to this so any help on how I could maybe solve this would be greatly appreciated.
Thanks!
|
[
"./conda install -c anaconda appnope\n./conda install -c anaconda libgfortran\n\n"
] |
[
0
] |
[] |
[] |
[
"anaconda",
"conda",
"linux",
"python",
"qiime"
] |
stackoverflow_0074048596_anaconda_conda_linux_python_qiime.txt
|
Q:
Python: .count() doesn't count
I'm writing a simple program that takes user input and prints the number of even, odd and zeros.
The program doesn't yield any errors, but it seems to skip lines 5 and 15
I want to count and display the zeroes in the numbers list
numbers = input("Numbers seperated by space:").split()
print("Numbers:" + str(numbers))
zero = numbers.count(0)
even = 0
odd = 0
for i in numbers:
if int(i) % 2 == 0:
even += 1
else:
odd += 1
even = even - zero
print("Even:" + str(even))
print("Odd:" + str(odd))
print("Zero:" + str(zero))
A:
Your code isn't working because inputs in Python are strings. So when you enter a number like 5, Python turns it into "5". To make your code work, change .count(0) to .count("0")
numbers = input("Numbers seperated by space:").split()
print("Numbers:" + str(numbers))
zero = numbers.count("0")
even = 0
odd = 0
for i in numbers:
if int(i) % 2 == 0:
even += 1
else:
odd += 1
even = even - zero
print("Even:" + str(even))
print("Odd:" + str(odd))
print("Zero:" + str(zero))
Output:
Numbers seperated by space:
5 0 0 2
Numbers:['5', '0', '0', '2']
Even:1
Odd:1
Zero:2
If you are sure that the input contains only numbers, you could also do
numbers = [int(elem) for elem in input("Numbers seperated by space:").split()]
zero = numbers.count(0)
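Either way, the core point can be verified in a few lines (hard-coding the sample input in place of input()):

```python
# input().split() always yields strings, so the list stores "0", not 0.
numbers = "5 0 0 2".split()
print(numbers)              # ['5', '0', '0', '2']

print(numbers.count(0))     # 0 -- there is no integer 0 in a list of strings
print(numbers.count("0"))   # 2 -- the string "0" is what is actually stored

# After converting, counting the integer works as expected.
as_ints = [int(n) for n in numbers]
print(as_ints.count(0))     # 2
```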
A:
When counting evens, zeros would also get counted as even, so I would check for that condition first
numbers = input("Numbers separated by space:").split()
print("Numbers:" + str(numbers))
zero = 0
even = 0
odd = 0
for i in numbers:
if int(i) == 0:
zero += 1
elif int(i) % 2 == 0:
even += 1
else:
odd += 1
# using f-string to format output instead
print(f"Even: {even}")
print(f"Odd: {odd}")
print(f"Zero: {zero}")
A:
numbers = input("Numbers separated by space:").split()
zero = numbers.count("0")
even = 0
odd = 0
for i in numbers:
if int(i) % 2 == 0 and i != '0':
even +=1
elif int(i) %2 !=0 and i != '0':
odd +=1
print("Even:" + str(even))
print("Odd:" + str(odd))
print("Zero:" + str(zero))
|
Python: .count() doesn't count
|
I'm writing a simple program that takes user input and prints the number of even, odd and zeros.
The program doesn't yield any errors, but it seems to skip lines 5 and 15
I want to count and display the zeroes in the numbers list
numbers = input("Numbers seperated by space:").split()
print("Numbers:" + str(numbers))
zero = numbers.count(0)
even = 0
odd = 0
for i in numbers:
if int(i) % 2 == 0:
even += 1
else:
odd += 1
even = even - zero
print("Even:" + str(even))
print("Odd:" + str(odd))
print("Zero:" + str(zero))
|
[
"Youre code isnt working because inputs in Python are strings. So when you enter a number like 5, Python turns it into \"5\". So to make your code work change .count(0) to .count(\"0\")\nnumbers = input(\"Numbers seperated by space:\").split()\n \nprint(\"Numbers:\" + str(numbers))\n \nzero = numbers.count(\"0\")\neven = 0\nodd = 0\n \nfor i in numbers:\n if int(i) % 2 == 0:\n even += 1\n else:\n odd += 1\n \neven = even - zero\n \nprint(\"Even:\" + str(even))\nprint(\"Odd:\" + str(odd))\nprint(\"Zero:\" + str(zero))\n\nOutput:\nNumbers seperated by space:\n5 0 0 2\nNumbers:['5', '0', '0', '2']\nEven:1\nOdd:1\nZero:2\n\nIf you are sure that only numbers are the input you could also do\nnumbers = [int(elem) for elem in input(\"Numbers seperated by space:\").split()]\nzero = numbers.count(0)\n\n",
"When counting evens, zeros may get added so I would check for this condition first\nnumbers = input(\"Numbers separated by space:\").split()\n \nprint(\"Numbers:\" + str(numbers))\n \nzero = 0\neven = 0\nodd = 0\n \nfor i in numbers:\n if int(i) == 0:\n zero += 1\n elif int(i) % 2 == 0:\n even += 1\n else:\n odd += 1\n\n# using f-string to format output instead\nprint(f\"Even: {even}\")\nprint(f\"Odd: {odd}\")\nprint(f\"Zero: {zero}\")\n\n",
"numbers = input(\"Numbers separated by space:\").split()\n\nzero = numbers.count(\"0\")\neven = 0\nodd = 0\n\nfor i in numbers:\n if int(i) % 2 == 0 and i != '0':\n even +=1\n elif int(i) %2 !=0 and i != '0':\n odd +=1\n\nprint(\"Even:\" + str(even))\nprint(\"Odd:\" + str(odd))\nprint(\"Zero:\" + str(zero))\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074561070_python.txt
|
Q:
How to make a bot to automatically open Zoom meeting and enter class when it's time in the schedule by using Python?
I am making a bot to automatically open a Zoom meeting and enter the class when it's time. I set it so that from 8:00 PM to 8:10 PM the computer will automatically open Zoom and enter the meeting code and password, but it's not running. I have tried many ways but nothing happens. Hopefully someone can help fix this.
Thank you very much!
Here is my code
import subprocess
import pyautogui
import time
import pandas as pd
from datetime import datetime
import pyttsx3
import os
#---------------------------------------------------------
# Robot speech
# Jarvis_brain = speak
# Jarvis_mouth = engine
assistant= "Jarvis" # Iron man Fan
Jarvis_mouth = pyttsx3.init()
Jarvis_mouth.setProperty("rate", 140)
voices = Jarvis_mouth.getProperty("voices")
Jarvis_mouth.setProperty("voice", voices[1].id)
def Jarvis_brain(audio):
print("Jarvis: " + audio)
Jarvis_mouth.say(audio)
Jarvis_mouth.runAndWait()
def sign_in(meetingid, pswd):
#Opens up the zoom app
#change the path specific to your computer
#If on windows use below line for opening zoom
#subprocess.call('C:\\myprogram.exe')
#If on mac / Linux use below line for opening zoom
subprocess.call(["C:\\Users\\PC\\AppData\\Roaming\\Zoom\\bin\\Zoom.exe"])
time.sleep(1)
#clicks the join button
join_btn = pyautogui.locateCenterOnScreen('join_button.png')
pyautogui.moveTo(join_btn)
pyautogui.click()
# Type the meeting ID
meeting_id_btn = pyautogui.locateCenterOnScreen('meeting_id_button.png')
pyautogui.moveTo(meeting_id_btn)
pyautogui.click()
pyautogui.write(meetingid)
# Hits the join button
join_btn = pyautogui.locateCenterOnScreen('join_btn.png')
pyautogui.moveTo(join_btn)
pyautogui.click()
time.sleep(2)
#Types the password and hits enter
meeting_pswd_btn = pyautogui.locateCenterOnScreen('meeting_pswd1.png')
pyautogui.moveTo(meeting_pswd_btn)
pyautogui.click()
pyautogui.write(pswd)
pyautogui.press('enter')
# Reading the file
df = pd.read_csv('timings.csv')
while True:
# checking of the current time exists in our csv file
now = datetime.now().strftime("%A %H:%M")
if now in str(df['timings1']) and now in str(df['timings2']):
row1 = df.loc[df['timings1'] >= now] # if the time is 8 PM or more it will open Zoom and enter the code
row2 = df.loc[df['timings2'] <= now] ## if the time is 8:10 PM or less it will open Zoom and enter the code
m_id = str(row1,row2.iloc[0,1])
m_pswd = str(row1,row2.iloc[0,2])
sign_in(m_id, m_pswd)
Jarvis_brain('signed in')
time.sleep(60)
else:
Jarvis_brain("error. please try again")
Here is the file to set the schedule: timings.csv
timings1 and timing2: day and time
timings1, timings2, meetingid, meetpswd
Monday 20:06, Monday 20:40, 456 884 2391, 12345670
Thanks for helping me.
A:
Assuming you do have a working zoom link e.g.
https://yourCompanyName.zoom.us/j/1234567890
To launch zoom on MacOS (tested) and Linux (untested) you can do a C system(...) call like this:
system("open -a zoom.us 'https://yourCompanyName.zoom.us/j/1234567890'");
Which in Python would translate to:
import subprocess
subprocess.run(["open", "-a", "zoom.us", "https://yourCompanyName.zoom.us/j/1234567890"])
But judging from your code you are clearly on Windows environment.
Based on https://superuser.com/a/1563359 this should work:
%APPDATA%\Zoom\bin\Zoom.exe "url=https://yourCompanyName.zoom.us/j/1234567890"
i.e. in Python
import os
import subprocess
# %APPDATA% is a cmd.exe variable; expand it explicitly in Python
zoom = os.path.expandvars(r"%APPDATA%\Zoom\bin\Zoom.exe")
subprocess.run([zoom, "url=https://yourCompanyName.zoom.us/j/1234567890"])
You should now have the ability to programmatically launch your Zoom room.
I leave the time trigger as an exercise.
Also take into account that the process of launching Zoom and connecting takes some time, so it would be advisable to give it a head start to stay on schedule.
There is no error handling here if anything goes wrong. But that's beyond this question.
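The time trigger left as an exercise can be sketched like this; `parse_slot`/`in_window` and the demo date are my own (hypothetical) names and values, matching the "Monday 20:06" format used in timings.csv:

```python
from datetime import datetime

# Locale-independent weekday names, matching the timings.csv format.
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]

def parse_slot(s):
    """Parse a timings.csv entry like 'Monday 20:06' into (day, minutes)."""
    day, hhmm = s.strip().split()
    hours, minutes = map(int, hhmm.split(":"))
    return day, hours * 60 + minutes

def in_window(now, start, end):
    """True if datetime `now` falls inside the [start, end] window."""
    d1, t1 = parse_slot(start)
    d2, t2 = parse_slot(end)
    now_day = DAYS[now.weekday()]
    now_min = now.hour * 60 + now.minute
    return now_day == d1 == d2 and t1 <= now_min <= t2

# 2021-09-06 was a Monday.
print(in_window(datetime(2021, 9, 6, 20, 5), "Monday 20:00", "Monday 20:10"))
```

In the polling loop you would call `in_window(datetime.now(), row['timings1'], row['timings2'])` for each CSV row instead of comparing formatted strings.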
|
How to make a bot to automatically open Zoom meeting and enter class when it's time in the schedule by using Python?
|
I am making a bot to automatically open a Zoom meeting and enter the class when it's time. I set it so that from 8:00 PM to 8:10 PM the computer will automatically open Zoom and enter the meeting code and password, but it's not running. I have tried many ways but nothing happens. Hopefully someone can help fix this.
Thank you very much!
Here is my code
import subprocess
import pyautogui
import time
import pandas as pd
from datetime import datetime
import pyttsx3
import os
#---------------------------------------------------------
# Robot speech
# Jarvis_brain = speak
# Jarvis_mouth = engine
assistant= "Jarvis" # Iron man Fan
Jarvis_mouth = pyttsx3.init()
Jarvis_mouth.setProperty("rate", 140)
voices = Jarvis_mouth.getProperty("voices")
Jarvis_mouth.setProperty("voice", voices[1].id)
def Jarvis_brain(audio):
print("Jarvis: " + audio)
Jarvis_mouth.say(audio)
Jarvis_mouth.runAndWait()
def sign_in(meetingid, pswd):
#Opens up the zoom app
#change the path specific to your computer
#If on windows use below line for opening zoom
#subprocess.call('C:\\myprogram.exe')
#If on mac / Linux use below line for opening zoom
subprocess.call(["C:\\Users\\PC\\AppData\\Roaming\\Zoom\\bin\\Zoom.exe"])
time.sleep(1)
#clicks the join button
join_btn = pyautogui.locateCenterOnScreen('join_button.png')
pyautogui.moveTo(join_btn)
pyautogui.click()
# Type the meeting ID
meeting_id_btn = pyautogui.locateCenterOnScreen('meeting_id_button.png')
pyautogui.moveTo(meeting_id_btn)
pyautogui.click()
pyautogui.write(meetingid)
# Hits the join button
join_btn = pyautogui.locateCenterOnScreen('join_btn.png')
pyautogui.moveTo(join_btn)
pyautogui.click()
time.sleep(2)
#Types the password and hits enter
meeting_pswd_btn = pyautogui.locateCenterOnScreen('meeting_pswd1.png')
pyautogui.moveTo(meeting_pswd_btn)
pyautogui.click()
pyautogui.write(pswd)
pyautogui.press('enter')
# Reading the file
df = pd.read_csv('timings.csv')
while True:
# checking of the current time exists in our csv file
now = datetime.now().strftime("%A %H:%M")
if now in str(df['timings1']) and now in str(df['timings2']):
row1 = df.loc[df['timings1'] >= now] # if the time is 8 PM or more it will open Zoom and enter the code
row2 = df.loc[df['timings2'] <= now] ## if the time is 8:10 PM or less it will open Zoom and enter the code
m_id = str(row1,row2.iloc[0,1])
m_pswd = str(row1,row2.iloc[0,2])
sign_in(m_id, m_pswd)
Jarvis_brain('signed in')
time.sleep(60)
else:
Jarvis_brain("error. please try again")
Here is the file to set the schedule: timings.csv
timings1 and timing2: day and time
timings1, timings2, meetingid, meetpswd
Monday 20:06, Monday 20:40, 456 884 2391, 12345670
Thanks for helping me.
|
[
"Assuming you do have a working zoom link e.g.\nhttps://yourCompanyName.zoom.us/j/1234567890\n\nTo launch zoom on MacOS (tested) and Linux (untested) you can do a C system(...) call like this:\nsystem(\"open -a zoom.us 'https://yourCompanyName.zoom.us/j/1234567890'\");\n\nWhich in Python would translate to:\nimport subprocess\nsubprocess.run([\"open\", \"-a\", \"zoom.us\", \"https://yourCompanyName.zoom.us/j/1234567890\"])\n\nBut judging from your code you are clearly on Windows environment.\nBased on https://superuser.com/a/1563359 this should work:\n%APPDATA%\\Zoom\\bin\\Zoom.exe \"url=https://yourCompanyName.zoom.us/j/1234567890\"\n\ni.e. in Python\nimport subprocess\nsubprocess.run([\"%APPDATA%\\Zoom\\bin\\Zoom.exe\", \"\\\"url=https://yourCompanyName.zoom.us/j/1234567890\\\"\"])\n\nYou should now have the ability to programmatically launch your Zoom room.\nI leave the time trigger as an exercise.\nAlso take into account the process of launching Zoom and connecting also takes some time so it would be advisable to give it a head start to be on schedule.\nThere is no error handling here if anything goes wrong. But that's beyond this question.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_3.9",
"python_3.x"
] |
stackoverflow_0068985648_python_python_3.9_python_3.x.txt
|
Q:
Can we use `bool or None` instead of Union[bool, None] for type annotating?
I'm using python3.8 and have a variable which can be True, False or None. For type-hinting this variable, I know I can use Union for variables that may have one of several types. But personally I don't prefer using Union. I think it's easier to use the newer python syntax bool | None, but it's not available in python3.8 (I think it's from 3.9 or 3.10). I want to know: is it correct to use bool or None for this scenario?
At first I thought it's wrong, because bool or None will be eventually executed and become bool.
>>> bool or None
<class 'bool'>
But pycharm's type checker didn't complain about it. Is this correct?
A:
You've answered your own question.
bool or None # returns bool type
bool | None # just equal to Union[bool, None] for Python 3.10+
# and provides cleanest syntax for Type Hinting
Of course you can't use this in Python 3.9 or lower, because this syntax (the | operator between types, PEP 604) is not implemented there. If you want to use such type hints in Python 3.8, you have to use typing.Union.
A:
@smnenko's answer is correct, but even in Python 3.10 you can also do
from typing import Optional
x: Optional[bool]
Optional[T] is the same as Union[T, None].
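A quick self-contained check (this runs on Python 3.8) that the spellings are interchangeable; `toggle` is a made-up example function:

```python
from typing import Optional, Union, get_type_hints

# Optional[bool] is constructed as Union[bool, None] under the hood.
print(Optional[bool] == Union[bool, None])   # True

def toggle(flag: Optional[bool]) -> bool:
    """Treat None like False and invert the flag."""
    return not flag

print(toggle(None))                          # True
print(toggle(True))                          # False
print(get_type_hints(toggle)["flag"])
```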
|
Can we use `bool or None` instead of Union[bool, None] for type annotating?
|
I'm using python3.8 and have a variable which can be True, False or None. For type-hinting this variable I know I can use Union for variables where they may have divergent types. But personally I don't prefer using Union. I think it's easier to use the newer python syntax bool | None but it's not available in python3.8 (I think it's for 3.9 or 3.10). I want to know is it correct to use bool or None for this scenario?
At first I thought it's wrong, because bool or None will be eventually executed and become bool.
>>> bool or None
<class 'bool'>
But pycharm's type checker didn't complain about it. Is this correct?
|
[
"You've answered on your issue youself.\nbool or None # returns bool type\nbool | None # just equal to Union[bool, None] for Python 3.10+ \n # and provides cleanest syntax for Type Hinting\n\nSure you can't use this in Python 3.9 or lower, because this structure (bitwise or) is not implemented. If you exactly wants to use type hints in Python 3.8, you have to use typing.Union\n",
"@smnenko's answer is correct, but even in Python 3.10 you can also do\nfrom typing import Optional\n\nx: Optional[bool]\n\nOptional[T] is the same as Union[T, None].\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"type_annotation"
] |
stackoverflow_0074580752_python_type_annotation.txt
|
Q:
Consider a set of items I = {1, 2,..., N}. What is the size of all possible valid itemsets?
I have been thinking about this question but have not come up with an answer. Is there a subject area I could look at that my question relates to? Question is as mentioned in title. Are there any ways I could implement perhaps a small python program to help myself in this case using items in a dataframe? I feel like this is a question where the answer may be reachable without coding.
Any help appreciated
A:
I think this is covered thoroughly in most introductory probability texts, in the sections on combinatorics. I'd look up topics like counting rules for combinations and permutations to get some foundational knowledge on this.
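For what it's worth, the standard counting argument gives 2**N subsets of an N-item set (2**N - 1 if the empty itemset is excluded), which a small brute-force enumeration confirms:

```python
from itertools import combinations

def all_itemsets(items):
    """Enumerate every non-empty subset (itemset) of `items`."""
    return [set(c)
            for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

items = [1, 2, 3, 4]
subsets = all_itemsets(items)
print(len(subsets))   # 15 == 2**4 - 1
```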
|
Consider a set of items I = {1, 2,..., N}. What is the size of all possible valid itemsets?
|
I have been thinking about this question but have not come up with an answer. Is there a subject area I could look at that my question relates to? Question is as mentioned in title. Are there any ways I could implement perhaps a small python program to help myself in this case using items in a dataframe? I feel like this is a question where the answer may be reachable without coding.
Any help appreciated
|
[
"I think this is covered thoroughly in most introductory probability texts in the sections covering combinatorics. I'd look up topics like counting for combinations and permutations to get some foundational knowledge on this\n"
] |
[
0
] |
[] |
[] |
[
"associations",
"data_mining",
"dataframe",
"python",
"set"
] |
stackoverflow_0074579091_associations_data_mining_dataframe_python_set.txt
|
Q:
Django Rest how to save current user when creating an new blog?
When I am creating a blog post I also want to automatically save the current user, without manually selecting the user as the blog author.
here is my code:
models.py:
class Blog(models.Model):
author = models.ForeignKey(
settings.AUTH_USER_MODEL, on_delete=models.CASCADE, blank=True, null=True)
blog_title = models.CharField(max_length=200, unique=True)
serializers.py
class BlogSerializer(serializers.ModelSerializer):
class Meta:
model = Blog
views.py
class BlogViewSet(viewsets.ModelViewSet):
queryset = Blog.objects.all().order_by('-id')
serializer_class = BlogSerializer
pagination_class = BlogPagination
lookup_field = 'blog_slug'
def get_permissions(self):
if self.action == 'retrieve':
permission_classes = [IsOwnerOrReadOnly]
elif self.action == 'list':
permission_classes = [IsOwnerOrReadOnly]
else:
permission_classes = [IsOwnerOrReadOnly & IsAuthorGroup]
return [permission() for permission in permission_classes]
A:
You can modify your serializer like below. It picks up the user from the request context and creates the blog.
class BlogSerializer(serializers.ModelSerializer):
class Meta:
model = Blog
fields = "__all__"
read_only_fields = ["author"]
def create(self, validated_data):
user = self.context["request"].user
blog = Blog.objects.create(**validated_data, author=user)
return blog
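An equivalent, DRF-idiomatic option is to override perform_create on the ViewSet and call serializer.save(author=self.request.user) there. Either way, the idea is "inject the current user at save time"; the sketch below illustrates that idea with plain-Python stand-ins (hypothetical class names, no Django needed):

```python
# Plain-Python stand-ins for the Django/DRF objects involved.
class User:
    def __init__(self, username):
        self.username = username

class Request:
    def __init__(self, user):
        self.user = user

class Blog:
    def __init__(self, author, blog_title):
        self.author = author
        self.blog_title = blog_title

class BlogSerializer:
    """Mimics a ModelSerializer with a read-only `author` field that is
    filled in from the request stored in the serializer context."""
    def __init__(self, data, context):
        self.validated_data = data   # pretend validation already ran
        self.context = context

    def create(self, validated_data):
        user = self.context["request"].user
        return Blog(author=user, **validated_data)

    def save(self):
        return self.create(self.validated_data)

serializer = BlogSerializer(
    data={"blog_title": "Hello"},
    context={"request": Request(user=User("alice"))},
)
blog = serializer.save()
print(blog.author.username)   # alice
```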
|
Django Rest how to save current user when creating an new blog?
|
When I am creating a blog post I also want to automatically save the current user, without manually selecting the user as the blog author.
here is my code:
models.py:
class Blog(models.Model):
author = models.ForeignKey(
settings.AUTH_USER_MODEL, on_delete=models.CASCADE, blank=True, null=True)
blog_title = models.CharField(max_length=200, unique=True)
serializers.py
class BlogSerializer(serializers.ModelSerializer):
class Meta:
model = Blog
views.py
class BlogViewSet(viewsets.ModelViewSet):
queryset = Blog.objects.all().order_by('-id')
serializer_class = BlogSerializer
pagination_class = BlogPagination
lookup_field = 'blog_slug'
def get_permissions(self):
if self.action == 'retrieve':
permission_classes = [IsOwnerOrReadOnly]
elif self.action == 'list':
permission_classes = [IsOwnerOrReadOnly]
else:
permission_classes = [IsOwnerOrReadOnly & IsAuthorGroup]
return [permission() for permission in permission_classes]
|
[
"You can modify your serializer like below. It picks up the user from the request context and creates the blog.\nclass BlogSerializer(serializers.ModelSerializer):\n class Meta:\n model = Blog\n fields = \"__all__\"\n read_only_fields = [\"author\"]\n\n def create(self, validated_data):\n user = self.context[\"request\"].user\n blog = Blog.objects.create(**validated_data, author=user)\n\n return blog\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_rest_framework",
"python",
"python_3.x"
] |
stackoverflow_0074580843_django_django_rest_framework_python_python_3.x.txt
|
Q:
How could I create a system for my trading bots
I want to create a system where I can manage private trading bots, but I don't know how to architect it
with OOP, or whether to create a file for each bot
I will store the strategies in one file so I can import it
create a class for the bot that has stop and start methods
this is all easy; what I don't know how to do is create many bot objects from this class and manage their statuses, STOPPED or ACTIVE
I will manage it through a GUI (start or stop it), and show some info from a database
the only problem is how I will manage the objects, like I'm a user (object) that does functions and has info
how do I create an object -- store its info in DB --- start method activated --- ? then what? That is what I don't understand
how would you plan this system?
A:
You can try to write the bots as separate projects and run them via a subprocess from the main file. Just create main.py with the GUI and add functions that start or stop your bots.
p = subprocess.Popen(['python3', 'bot1.py']) # on start button
p.kill() # on stop button
You can also track subprocess activity from Python. See the subprocess docs for further details.
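A minimal sketch of how the GUI side could track those subprocesses and their STOPPED/ACTIVE status (BotManager is a hypothetical name; a sleeping python -c command stands in here for bot1.py):

```python
import subprocess
import sys

class BotManager:
    """Tracks one subprocess per bot name; status is derived from the process."""

    def __init__(self):
        self.bots = {}  # name -> subprocess.Popen

    def start(self, name, cmd):
        # Ignore the request if this bot is already running
        if name in self.bots and self.bots[name].poll() is None:
            return
        self.bots[name] = subprocess.Popen(cmd)

    def stop(self, name):
        p = self.bots.get(name)
        if p is not None and p.poll() is None:
            p.terminate()
            p.wait()

    def status(self, name):
        p = self.bots.get(name)
        if p is None:
            return "UNKNOWN"
        return "ACTIVE" if p.poll() is None else "STOPPED"

manager = BotManager()
# In a real setup this would be something like ["python3", "bot1.py"]
manager.start("bot1", [sys.executable, "-c", "import time; time.sleep(30)"])
print(manager.status("bot1"))  # ACTIVE
manager.stop("bot1")
print(manager.status("bot1"))  # STOPPED
```

The bot states live in the Popen objects themselves, so there is no separate status flag to keep in sync; the database only needs the bots' configuration and any results they report.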
|
How could I create a system for my trading bots
|
I want to create a system where I can manage a privet trading bots ,I don't Know how to architects it
with OOP or create a file for each bot
I will store the strategies in one file so I can import it
create a class for the bot that have stop and start methods
this is all easy , what I don't Know to do is how to create many bots Objects from this class and manage their statues STOPPES or ACTIVE
I will manage it through GUI(start or stop it) , and show some info from database
the only problem is how I will manage the objects , like I'm a user(object)that do a functions and has info how
does I create an object -- store its info in DB --- start method activated --- ? then what I don't understand
how would you plan this system ?
|
[
"You can try to write bots in a separate projects and run them via a subprocess from main file. Just create main.py with GUI and set functions that calls or stops your bots.\np = subprocess.Popen(['python3', 'bot1.py']) # on start button\np.kill() # on stop button\n\nAlso you can track subprocess activity using Python. See subprocess docs for other details.\n"
] |
[
1
] |
[] |
[] |
[
"architecture",
"class",
"object",
"oop",
"python"
] |
stackoverflow_0074580855_architecture_class_object_oop_python.txt
|
Q:
how to parse the xml thru xml parser by using xml.etree.ElementTree with the below sample
trying to parse below XML which seems to be a different model.
<?xml version="1.0" encoding="UTF-8"?>
<book>
<item neighbor-name="ABC-LENGTH" pos="1" size="8" type="INT"/>
<item neighbor-name="ABC-CODE" pos="9" size="3" type="STRING"/>
<item neighbor-name="DEF-IND" pos="12" size="1" type="STRING"/>
<item neighbor-name="JKL-ID" pos="13" size="15" type="STRING"/>
<item neighbor-name="KLN-DATE" pos="28" size="8" type="STRING" red="true">
<item neighbor-name="KER-YR" pos="28" size="4" type="INT"/>
<item neighbor-name="KER-MO" pos="32" size="2" type="INT"/>
<item neighbor-name="KER-DA" pos="34" size="2" type="INT"/>
</item>
</book>
Trying to pull only the assigned values thru the parser.
ABC-LENGTH 1 8 INT
ABC-CODE 9 3 STRING
.
.
KLN-DATE 28 8 STRING true
.
.
But nothing seems to be working. I tried all the options like tag, attribute, etc., but each time I get a return code of zero and no output.
Thanks in advance.
A:
I copied your XML into a file named "book.xml".
Then you can easily walk through it with .iter() and grab the values of the attributes with .get():
import pandas as pd
import xml.etree.ElementTree as ET
tree = ET.parse("book.xml")
root = tree.getroot()
columns = ["neighbor-name", "pos", "size", "type", "red"]
data = []
for node in root.iter("item"):
a = [node.get("neighbor-name"), node.get("pos"), node.get("size"), node.get("type"), node.get("red")]
data.append(a)
df = pd.DataFrame(data, columns = columns)
print(df)
Output:
neighbor-name pos size type red
0 ABC-LENGTH 1 8 INT None
1 ABC-CODE 9 3 STRING None
2 DEF-IND 12 1 STRING None
3 JKL-ID 13 15 STRING None
4 KLN-DATE 28 8 STRING true
5 KER-YR 28 4 INT None
6 KER-MO 32 2 INT None
7 KER-DA 34 2 INT None
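If pandas isn't wanted, the same root.iter("item") / node.get() walk can print the requested columns directly; a sketch using an inline, trimmed copy of the sample XML:

```python
import xml.etree.ElementTree as ET

xml_text = """<?xml version="1.0" encoding="UTF-8"?>
<book>
  <item neighbor-name="ABC-LENGTH" pos="1" size="8" type="INT"/>
  <item neighbor-name="KLN-DATE" pos="28" size="8" type="STRING" red="true"/>
</book>"""

root = ET.fromstring(xml_text)  # use ET.parse("book.xml").getroot() for a file
for node in root.iter("item"):
    values = [node.get(a) for a in ("neighbor-name", "pos", "size", "type", "red")]
    # Skip attributes that are absent on this item (get() returns None)
    print(" ".join(v for v in values if v is not None))
```

Output:
ABC-LENGTH 1 8 INT
KLN-DATE 28 8 STRING true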
|
how to parse the xml thru xml parser by using xml.etree.ElementTree with the below sample
|
trying to parse below XML which seems to be a different model.
<?xml version="1.0" encoding="UTF-8"?>
<book>
<item neighbor-name="ABC-LENGTH" pos="1" size="8" type="INT"/>
<item neighbor-name="ABC-CODE" pos="9" size="3" type="STRING"/>
<item neighbor-name="DEF-IND" pos="12" size="1" type="STRING"/>
<item neighbor-name="JKL-ID" pos="13" size="15" type="STRING"/>
<item neighbor-name="KLN-DATE" pos="28" size="8" type="STRING" red="true">
<item neighbor-name="KER-YR" pos="28" size="4" type="INT"/>
<item neighbor-name="KER-MO" pos="32" size="2" type="INT"/>
<item neighbor-name="KER-DA" pos="34" size="2" type="INT"/>
</item>
</book>
Trying to pull only the assigned values thru the parser.
ABC-LENGTH 1 8 INT
ABC-CODE 9 3 STRING
.
.
KLN-DATE 28 8 STRING true
.
.
But , nothing seems to be working. Tried all the options like tag,attribute etc.. but each time getting return code as zero , but no output.
Thanks in advance.
|
[
"I copied your XML in a file named \"book.xml\".\nThan you can easy walk through with .iter() and grap the values of the attributes with .get():\nimport pandas as pd\nimport xml.etree.ElementTree as ET \n\ntree = ET.parse(\"book.xml\")\nroot = tree.getroot()\n\ncolumns = [\"neighbor-name\", \"pos\", \"size\", \"type\", \"red\"]\ndata = []\n\nfor node in root.iter(\"item\"):\n a = [node.get(\"neighbor-name\"), node.get(\"pos\"), node.get(\"size\"), node.get(\"type\"), node.get(\"red\")]\n data.append(a)\n\ndf = pd.DataFrame(data, columns = columns)\nprint(df)\n\nOutput:\n neighbor-name pos size type red\n0 ABC-LENGTH 1 8 INT None\n1 ABC-CODE 9 3 STRING None\n2 DEF-IND 12 1 STRING None\n3 JKL-ID 13 15 STRING None\n4 KLN-DATE 28 8 STRING true\n5 KER-YR 28 4 INT None\n6 KER-MO 32 2 INT None\n7 KER-DA 34 2 INT None\n\n"
] |
[
1
] |
[] |
[] |
[
"parsing",
"python",
"xml"
] |
stackoverflow_0074555021_parsing_python_xml.txt
|
Q:
Receive all messages in AWS SQS queue using boto library until queue is empty
I have a case where a script is writing all unused volume ids to AWS SQS queue and after some time, we need to receive all those messages with volume ids and delete those volumes. Is there a way to achieve this using python boto library?
Receive all messages in AWS SQS queue using boto library until queue is empty
A:
Your Python script should use the boto3 receive_message() command:
Retrieves one or more messages (up to 10), from the specified queue.
Once your program has finished processing a message, it should call delete_message() or delete_message_batch() to delete the message, passing the ReceiptHandle for each message that was obtained from the receive_message() call.
To process all messages in the queue, continue receiving and deleting messages within a loop.
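That loop can be sketched as follows (drain_queue and the handler are hypothetical names; the commented usage lines assume configured AWS credentials and a real queue URL):

```python
def drain_queue(sqs, queue_url, handle_message):
    """Receive batches of up to 10 messages, process and delete each one,
    and stop once the queue returns an empty response."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,   # SQS maximum per call
            WaitTimeSeconds=2,        # long polling cuts down empty responses
        )
        messages = resp.get("Messages", [])
        if not messages:
            break
        for msg in messages:
            handle_message(msg["Body"])  # e.g. delete the volume with this id
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])

# Usage sketch:
# import boto3
# sqs = boto3.client("sqs")
# drain_queue(sqs, "https://sqs.us-east-1.amazonaws.com/123456789012/unused-volumes",
#             lambda volume_id: print("would delete", volume_id))
```

Note that, because SQS is distributed, a single empty response does not strictly guarantee the queue is empty; if that matters, keep polling until several consecutive receives come back empty.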
|
Receive all messages in AWS SQS queue using boto library until queue is empty
|
I have a case where a script is writing all unused volume ids to AWS SQS queue and after some time, we need to receive all those messages with volume ids and delete those volumes. Is there a way to achieve this using python boto library?
Receive all messages in AWS SQS queue using boto library until queue is empty
|
[
"Your Python script should use the boto3 receive_message() command:\n\nRetrieves one or more messages (up to 10), from the specified queue.\n\nOnce your program has finished processing a message, it should call delete_message() or delete_message_batch() to delete the message, passing the ReceiptHandle for each message that was obtained from the receive_message() call.\nTo process all messages in the queue, continue receiving and deleting messages within a loop.\n"
] |
[
0
] |
[] |
[] |
[
"amazon_sqs",
"amazon_web_services",
"boto3",
"python"
] |
stackoverflow_0074578070_amazon_sqs_amazon_web_services_boto3_python.txt
|
Q:
Spotipy (sp.track() specifically) takes too long to run
I am trying to extract the release date, explicit flag and popularity score of approximately 18,000 songs. I want to append these results to my data frame.
Initially, I tried this. -
for i,track in enumerate(df['uri']):
release_dates.append(sp.track(track)['album']['release_date'])
But it took too long to run, so I assumed that it was the size of the dataset that was the problem.
Then I tried to run it over subsets of 50 songs each -
updated_popularity, explicit_flags, release_dates = [], [], []
for i in range(0,10000,50):
print("entered first for loop")
results = sp.track(df['uri'][i])
print("got track results")
for i, t in enumerate(results):
print("Second loop: Track = ", t)
updated_popularity.append(t['popularity'])
explicit_flags.append(t['explicit'])
release_dates.append(t['album']['release_date'])
print("Exited second loop\n")
However, my code has been running for hours now with no results. I've been stuck on this for a while and any help would be appreciated!
A:
It's much faster to request 50 tracks at once with sp.tracks(uri_list)
# function to divide a list of uris (or ids) into chunks of 50.
chunker = lambda y, x: [y[i : i + x] for i in range(0, len(y), x)]
# using the function
uri_chunks = chunker(uri_list, 50)
updated_popularity, explicit_flags, release_dates = [], [], []
for chunk in uri_chunks:
print("entered first for loop")
results = sp.tracks(chunk)
print("got tracks results")
for t in results["tracks"]:
updated_popularity.append(t['popularity'])
explicit_flags.append(t['explicit'])
release_dates.append(t['album']['release_date'])
print("Exited second loop\n")
print(updated_popularity)
|
Spotipy (sp.track() specifically) takes too long to run
|
I am trying to extract the release data, explicit flag and popularity score of approximately 18,000 songs. I want to append these results to my data frame
Initially, I tried this. -
for i,track in enumerate(df['uri']):
release_dates.append(sp.track(track)['album']['release_date'])
But I took too long to run, so I assumed that it was the size of the dataset that was the problem.
Then I tried to run it over subsets of 50 songs each -
updated_popularity, explicit_flags, release_dates = [], [], []
for i in range(0,10000,50):
print("entered first for loop")
results = sp.track(df['uri'][i])
print("got track results")
for i, t in enumerate(results):
print("Second loop: Track = ", t)
updated_popularity.append(t['popularity'])
explicit_flags.append(t['explicit'])
release_dates.append(t['album']['release_date'])
print("Exited second loop\n")
However, my code has been running for hours now with no results. I've been stuck on this for a while and any help would be appreciated!
|
[
"It's much faster to request 50 tracks at once with sp.tracks(uri_list)\n# function to divide a list of uris (or ids) into chuncks of 50.\nchunker = lambda y, x: [y[i : i + x] for i in range(0, len(y), x)]\n\n# using the function\nuri_chunks = chunker(uri_list, 50)\n\nupdated_popularity, explicit_flags, release_dates = [], [], []\n\nfor chunk in uri_chunks:\n print(\"entered first for loop\")\n results = sp.tracks(chunk)\n print(\"got tracks results\")\n for t in results[\"tracks\"]:\n updated_popularity.append(t['popularity'])\n explicit_flags.append(t['explicit'])\n release_dates.append(t['album']['release_date'])\n print(\"Exited second loop\\n\")\n\nprint(updated_popularity)\n\n"
] |
[
0
] |
[] |
[] |
[
"machine_learning",
"python",
"spotify",
"spotify_app",
"spotipy"
] |
stackoverflow_0074579070_machine_learning_python_spotify_spotify_app_spotipy.txt
|
Q:
Python: Is order preserved when iterating a tuple?
In Python, if I run the code:
T=('A','B','C','D')
D={}
i=0
for item in T:
D[i]=item
i=i+1
Can I be sure that D will be organized as:
D = {0:'A', 1:'B', 2:'C', 3:'D'}
I know that tuples' order cannot be changed because they are immutable, but am I guaranteed that it will always be iterated in order as well?
A:
Yes, tuples are ordered and iteration follows that order. Guaranteed™.
You can generate your D in one expression with enumerate() to produce the indices:
D = dict(enumerate(T))
That's because enumerate() produces (index, value) tuples, and dict() accepts a sequence of (key, value) tuples to produce the dictionary:
>>> T = ('A', 'B', 'C', 'D')
>>> dict(enumerate(T))
{0: 'A', 1: 'B', 2: 'C', 3: 'D'}
A:
Since tuples are sequences (see The standard type hierarchy) and are ordered by implementation, they provide the __iter__() method to comply with the iterator protocol, which yields each value of the tuple in that same order, because iteration simply walks the same underlying object.
|
Python: Is order preserved when iterating a tuple?
|
In Python, if I run the code:
T=('A','B','C','D')
D={}
i=0
for item in T:
D[i]=item
i=i+1
Can I be sure that D will be organized as:
D = {0:'A', 1:'B', 2:'C', 3:'D'}
I know that tuples' order cannot be changed because they are immutable, but am I guaranteed that it will always be iterated in order as well?
|
[
"Yes, tuples are ordered and iteration follows that order. Guaranteed™.\nYou can generate your D in one expression with enumerate() to produce the indices:\nD = dict(enumerate(T))\n\nThat's because enumerate() produces (index, value) tuples, and dict() accepts a sequence of (key, value) tuples to produce the dictionary:\n>>> T = ('A', 'B', 'C', 'D')\n>>> dict(enumerate(T))\n{0: 'A', 1: 'B', 2: 'C', 3: 'D'}\n\n",
"Since tuples are sequences (due to The standard type hierarchy) and are ordered by implementation, it has the __iter__() method to comply with the Iterator protocol which means that each next value of the tuple just yielded in the same ordered fashion because point the same object.\n"
] |
[
22,
0
] |
[] |
[] |
[
"loops",
"python",
"tuples"
] |
stackoverflow_0025670989_loops_python_tuples.txt
|
Q:
yes/no loop not working properly when i used OR keyword
I was using a yes/no loop to make an infinite loop that would end when the user enters no or No, but the program was not working properly. I know what the error is, but I don't know why it is occurring like this. Can anyone tell me how to fix the error without changing my initial program?
When I use this code it works, but when I use if a=='yes' or 'Yes' and elif a=='no' or 'No', the output somehow shows the print statement of the if branch even when I enter no.
My program without the OR condition
while True:
a = input("Enter yes/no to continue")
if a=='yes':
print("enter the program")
elif a=='no':
print("EXIT")
break
else:
print("Enter either yes/no")
My initial program with OR condition
while True:
a = input("Enter yes/no to continue")
if a=='yes' or 'Yes':
print("enter the program")
elif a=='no' or 'No':
print("EXIT")
break
else:
print("Enter either yes/no")
A:
You have a few options:
while True:
a = input("Enter yes/no to continue")
if a.lower()=='yes':
print("enter the program")
elif a.lower()=='no':
print("EXIT")
break
else:
print("Enter either yes/no")
or you can do this:
while True:
a = input("Enter yes/no to continue")
if a=='yes' or a=='Yes':
print("enter the program")
elif a=='no' or a=='No':
print("EXIT")
break
else:
print("Enter either yes/no")
A:
In an or statement you have to compare a with the value in all expressions:
while True:
a = input("Enter yes/no to continue")
if a == 'yes' or a == 'Yes':
print("enter the program")
elif a == 'no' or a == 'No':
print("EXIT")
break
else:
print("Enter either yes/no")
A more pythonic way is to use .lower() in your case. For example:
a == 'yes' or a == 'Yes' # is equeal to:
a.lower() == 'yes'
A:
When you use or, you have to write the complete condition again.
Here, if you also want to check a=="Yes", you should state it in full.
if a == 'yes' or a == 'Yes':
...
You can also use this:
if a.lower() == 'yes'
...
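The root cause, for reference: the broken condition groups as (a == 'yes') or 'Yes', and a non-empty string like 'Yes' is always truthy, so the if branch fires no matter what was typed. A membership test is another idiomatic fix:

```python
a = "no"

# The original condition is always true because 'Yes' is a truthy non-empty string
print(bool(a == 'yes' or 'Yes'))  # True, even though a is "no"

# A membership test checks both spellings at once
print(a in ('yes', 'Yes'))        # False
print("Yes" in ('yes', 'Yes'))    # True
```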
|
yes/no loop not working properly when i used OR keyword
|
I was using a yes/no loop to make an infinite loop which would end when user enters no or No but the program was not working properly. I know the what the error is but i don't know why is it occuring like this. Can anyone tell how to fix the error without changing my initial program
when i use this code it works but when i use if a=='yes' or 'Yes' and elif a=='no' or 'No' in the somehow the output shows the print statement of the if statement even when i enter no.
My program without the OR condition
while True:
a = input("Enter yes/no to continue")
if a=='yes':
print("enter the program")
elif a=='no':
print("EXIT")
break
else:
print("Enter either yes/no")
My initial program with OR condition
while True:
a = input("Enter yes/no to continue")
if a=='yes' or 'Yes':
print("enter the program")
elif a=='no' or 'No':
print("EXIT")
break
else:
print("Enter either yes/no")
|
[
"You have a few options:\nwhile True:\n a = input(\"Enter yes/no to continue\")\n if a.lower()=='yes':\n print(\"enter the program\")\n elif a.lower()=='no':\n print(\"EXIT\")\n break\n else:\n print(\"Enter either yes/no\")\n\nor you can do this:\nwhile True:\n a = input(\"Enter yes/no to continue\")\n if a=='yes' or a=='Yes':\n print(\"enter the program\")\n elif a=='no' or a=='No':\n print(\"EXIT\")\n break\n else:\n print(\"Enter either yes/no\")\n\n",
"In an or statement you have to compare a with the value in all expressions:\nwhile True:\n a = input(\"Enter yes/no to continue\")\n if a == 'yes' or a == 'Yes':\n print(\"enter the program\")\n elif a == 'no' or a == 'No':\n print(\"EXIT\")\n break\n else:\n print(\"Enter either yes/no\")\n\nA more pythonic way is to use .lower() in your case. For example:\na == 'yes' or a == 'Yes' # is equeal to:\na.lower() == 'yes'\n\n",
"When you use or, you should write complete condition again.\nHere if you want to check a==\"Yes\" also, you should declare it completely.\nif a == 'yes' or a == 'Yes':\n...\n\nYou can also use this:\nif a.lower() == 'yes'\n...\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074580988_python.txt
|
Q:
Setting command parameter descriptions in discord.py
I am making a command in a bot to create a profile for a user. It is working fine, but I would like the description of the "name" parameter to say "What would you like to be called?".
Here is the code I currently have:
import discord
from discord import app_commands
@tree.command(name="makeprofile", description="Make your own profile!", guild=discord.Object(id=000000000000))
async def make_profile(interaction, preferred_name: str, pronouns: str):
db.insert({'id': interaction.user.id, 'name': preferred_name, 'pronouns': pronouns})
A:
From the documentation:
@discord.app_commands.describe(**parameters)
Describes the given parameters by their name using the key of the keyword argument as the name.
So in your case:
@app_commands.describe(preferred_name = "What would you like to be called?")
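In the code from the question, the describe decorator is stacked directly under the tree.command decorator; a sketch (the pronouns description is a made-up example):

```python
@tree.command(name="makeprofile", description="Make your own profile!", guild=discord.Object(id=000000000000))
@app_commands.describe(
    preferred_name="What would you like to be called?",
    pronouns="Which pronouns do you use?",  # hypothetical description
)
async def make_profile(interaction, preferred_name: str, pronouns: str):
    db.insert({'id': interaction.user.id, 'name': preferred_name, 'pronouns': pronouns})
```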
|
Setting command parameter descriptions in discord.py
|
I am making a command in a bot to create a profile for a user. It is working fine, but I would like the description of the "name" parameter to say "What would you like to be called?".
Here is the code I currently have:
import discord
from discord import app_commands
@tree.command(name="makeprofile", description="Make your own profile!", guild=discord.Object(id=000000000000))
async def make_profile(interaction, preferred_name: str, pronouns: str):
db.insert({'id': interaction.user.id, 'name': preferred_name, 'pronouns': pronouns})
|
[
"From the documentation:\n\n@discord.app_commands.describe(**parameters)\n\nDescribes the given parameters by their name using the key of the keyword argument as the name.\n\nSo in your case:\n@app_commands.describe(preferred_name = \"What would you like to be called?\")\n\n"
] |
[
2
] |
[] |
[] |
[
"discord",
"discord.py",
"field_description",
"parameters",
"python"
] |
stackoverflow_0074580979_discord_discord.py_field_description_parameters_python.txt
|
Q:
numpy.core._exceptions.MemoryError: Unable to allocate space for array
error
numpy.core._exceptions.MemoryError: Unable to allocate 362. GiB for an array with shape (2700000, 18000) and data type float64
https://www.kaggle.com/datasets/netflix-inc/netflix-prize-data
I'm working on the Netflix Prize data set, which has a lot of movies and user ids. My task is to apply matrix factorization, so I need to create a matrix of 2700000 x 18000 that stores ints in the range 1 to 5. I tried many ways but am still unable to create a matrix of that size; I tried forcing it to be uint8, but then the shape of the matrix I get is wrong. Please help me solve this.
A:
Your 3 million by 20000 matrix had better be sparse, or you will need a computer with a very large amount of memory. One copy of a full dense real matrix of that size will require a few hundred GB or even a few TB of contiguous space.
Exploit a more efficient matrix representation, like the sparse scipy.sparse.csc_matrix. The question is whether the matrix consists mostly of 0 scores.
Modify your algorithm to work on submatrices.
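For reference, the 362 GiB figure in the error is just dense-array arithmetic, and even a uint8 version stays around 45 GiB, which is why sparsity (most user/movie cells are unrated) is the practical way out:

```python
rows, cols = 2_700_000, 18_000
entries = rows * cols                 # 48.6 billion cells

f64_gib = entries * 8 / 2**30         # float64: 8 bytes per cell
u8_gib = entries * 1 / 2**30          # uint8: 1 byte per cell
print(round(f64_gib), round(u8_gib))  # 362 45
```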
|
numpy.core._exceptions.MemoryError: Unable to allocate space for array
|
error
numpy.core._exceptions.MemoryError: Unable to allocate 362. GiB for an array with shape (2700000, 18000) and data type float64
https://www.kaggle.com/datasets/netflix-inc/netflix-prize-data
im working on this netflix prize data set which has a lot of movies and user ids my work is to apply matrix factorization so i need to create a matrix of 2700000 X 18000 which stores int in range 1 to 5 I tried many ways but still unable to create a matrix of that size tried forcing it to be uint8 but the shape of the matrix which im getting is wrong please help me solve this
|
[
"Your 3 million by 20000 matrix better be sparse or you will need a computer with a very large amount of memory. One copy of a full real matrix that size will require a few hundreds GB or even a few TB of contiguous space.\n\nExploit more efficient matrix representation, like sparse one scipy.sparse.csc_matrix. The question is if the matrix has most of 0 scores.\nModify your algorithm to work on submatrices.\n\n"
] |
[
1
] |
[] |
[] |
[
"kaggle",
"machine_learning",
"matrix_factorization",
"numpy",
"python"
] |
stackoverflow_0074580778_kaggle_machine_learning_matrix_factorization_numpy_python.txt
|
Q:
Are dictionaries ordered in Python 3.6+?
Dictionaries are insertion ordered as of Python 3.6. It is described as a CPython implementation detail rather than a language feature. The documentation states:
dict() now uses a “compact” representation pioneered by PyPy. The memory usage of the new dict() is between 20% and 25% smaller compared to Python 3.5. PEP 468 (Preserving the order of **kwargs in a function.) is implemented by this. The order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon (this may change in the future, but it is desired to have this new dict implementation in the language for a few releases before changing the language spec to mandate order-preserving semantics for all current and future Python implementations; this also helps preserve backwards-compatibility with older versions of the language where random iteration order is still in effect, e.g. Python 3.5). (Contributed by INADA Naoki in issue 27350. Idea originally suggested by Raymond Hettinger.)
How does the new dictionary implementation perform better than the older one while preserving element order?
Update December 2017: dicts retaining insertion order is guaranteed for Python 3.7
A:
Are dictionaries ordered in Python 3.6+?
They are insertion ordered[1].
As of Python 3.6, for the CPython implementation of Python, dictionaries remember the order of items inserted. This is considered an implementation detail in Python 3.6; you need to use OrderedDict if you want insertion ordering that's guaranteed across other implementations of Python (and other ordered behavior[1]).
As of Python 3.7, this is a guaranteed language feature, not merely an implementation detail. From a python-dev message by GvR:
Make it so. "Dict keeps insertion order" is the ruling. Thanks!
This simply means that you can depend on it. Other implementations of Python must also offer an insertion ordered dictionary if they wish to be a conforming implementation of Python 3.7.
How does the Python 3.6 dictionary implementation perform better[2] than the older one while preserving element order?
Essentially, by keeping two arrays.
The first array, dk_entries, holds the entries (of type PyDictKeyEntry) for the dictionary in the order that they were inserted. Preserving order is achieved by this being an append only array where new items are always inserted at the end (insertion order).
The second, dk_indices, holds the indices for the dk_entries array (that is, values that indicate the position of the corresponding entry in dk_entries). This array acts as the hash table. When a key is hashed it leads to one of the indices stored in dk_indices and the corresponding entry is fetched by indexing dk_entries. Since only indices are kept, the type of this array depends on the overall size of the dictionary (ranging from type int8_t (1 byte) to int32_t/int64_t (4/8 bytes) on 32/64 bit builds)
In the previous implementation, a sparse array of type PyDictKeyEntry and size dk_size had to be allocated; unfortunately, it also resulted in a lot of empty space since that array was not allowed to be more than 2/3 * dk_size full for performance reasons. (and the empty space still had PyDictKeyEntry size!).
This is not the case now since only the required entries are stored (those that have been inserted) and a sparse array of type intX_t (X depending on dict size), 2/3 * dk_size full, is kept. The empty space changed from type PyDictKeyEntry to intX_t.
So, obviously, creating a sparse array of type PyDictKeyEntry is much more memory demanding than a sparse array for storing ints.
You can see the full conversation on Python-Dev regarding this feature if interested, it is a good read.
In the original proposal made by Raymond Hettinger, a visualization of the data structures used can be seen which captures the gist of the idea.
For example, the dictionary:
d = {'timmy': 'red', 'barry': 'green', 'guido': 'blue'}
is currently stored as [keyhash, key, value]:
entries = [['--', '--', '--'],
[-8522787127447073495, 'barry', 'green'],
['--', '--', '--'],
['--', '--', '--'],
['--', '--', '--'],
[-9092791511155847987, 'timmy', 'red'],
['--', '--', '--'],
[-6480567542315338377, 'guido', 'blue']]
Instead, the data should be organized as follows:
indices = [None, 1, None, None, None, 0, None, 2]
entries = [[-9092791511155847987, 'timmy', 'red'],
[-8522787127447073495, 'barry', 'green'],
[-6480567542315338377, 'guido', 'blue']]
As you can visually now see, in the original proposal, a lot of space is essentially empty to reduce collisions and make look-ups faster. With the new approach, you reduce the memory required by moving the sparseness where it's really required, in the indices.
[1]: I say "insertion ordered" and not "ordered" since, with the existence of OrderedDict, "ordered" suggests further behavior that the `dict` object *doesn't provide*. OrderedDicts are reversible, provide order-sensitive methods and, mainly, provide order-sensitive equality tests (`==`, `!=`). `dict`s currently don't offer any of those behaviors/methods.
[2]: The new dictionary implementation performs better **memory-wise** by being designed more compactly; that's the main benefit here. Speed-wise, the difference isn't so drastic; there are places where the new dict might introduce slight regressions (key-lookups, for example) while in others (iteration and resizing come to mind) a performance boost should be present.
Overall, the performance of the dictionary, especially in real-life situations, improves due to the compactness introduced.
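The order-sensitivity gap mentioned in footnote [1] is easy to demonstrate on Python 3.7+:

```python
from collections import OrderedDict

d1 = {'timmy': 'red', 'barry': 'green'}
d2 = {'barry': 'green', 'timmy': 'red'}

print(list(d1))                            # ['timmy', 'barry'] — insertion order kept
print(d1 == d2)                            # True: plain dict equality ignores order
print(OrderedDict(d1) == OrderedDict(d2))  # False: OrderedDict equality is order-sensitive
```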
A:
Below is answering the original first question:
Should I use dict or OrderedDict in Python 3.6?
I think this sentence from the documentation is actually enough to answer your question
The order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon
dict is not explicitly meant to be an ordered collection, so if you want to stay consistent and not rely on a side effect of the new implementation you should stick with OrderedDict.
Make your code future proof :)
There's a debate about that here.
EDIT: Python 3.7 will keep this as a feature see
A:
Update:
Guido van Rossum announced on the mailing list that as of Python 3.7 dicts in all Python implementations must preserve insertion order.
A:
I wanted to add to the discussion above but don't have the reputation to comment.
Python 3.8 includes the reversed() function on dictionaries (removing another difference from OrderedDict.
Dict and dictviews are now iterable in reversed insertion order using reversed(). (Contributed by Rémi Lapeyre in bpo-33462.)
See what's new in python 3.8
I don't see any mention of the equality operator or other features of OrderedDict so they are still not entirely the same.
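On Python 3.8+ the new reversed() support looks like this:

```python
d = {'a': 1, 'b': 2, 'c': 3}

print(list(reversed(d)))          # ['c', 'b', 'a']
print(list(reversed(d.items())))  # [('c', 3), ('b', 2), ('a', 1)]
```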
A:
To fully answer this question in 2020, let me quote several statements from official Python docs:
Changed in version 3.7: Dictionary order is guaranteed to be insertion order. This behavior was an implementation detail of CPython from 3.6.
Changed in version 3.7: Dictionary order is guaranteed to be insertion order.
Changed in version 3.8: Dictionaries are now reversible.
Dictionaries and dictionary views are reversible.
A statement regarding OrderedDict vs Dict:
Ordered dictionaries are just like regular dictionaries but have some extra capabilities relating to ordering operations. They have become less important now that the built-in dict class gained the ability to remember insertion order (this new behavior became guaranteed in Python 3.7).
A:
Changed in version 3.7: Dictionary order is guaranteed to be insertion order. This behavior was an implementation detail of CPython from 3.6.
|
Are dictionaries ordered in Python 3.6+?
|
Dictionaries are insertion ordered as of Python 3.6. It is described as a CPython implementation detail rather than a language feature. The documentation states:
dict() now uses a “compact” representation pioneered by PyPy. The memory usage of the new dict() is between 20% and 25% smaller compared to Python 3.5. PEP 468 (Preserving the order of **kwargs in a function.) is implemented by this. The order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon (this may change in the future, but it is desired to have this new dict implementation in the language for a few releases before changing the language spec to mandate order-preserving semantics for all current and future Python implementations; this also helps preserve backwards-compatibility with older versions of the language where random iteration order is still in effect, e.g. Python 3.5). (Contributed by INADA Naoki in issue 27350. Idea originally suggested by Raymond Hettinger.)
How does the new dictionary implementation perform better than the older one while preserving element order?
Update December 2017: dicts retaining insertion order is guaranteed for Python 3.7
|
[
"\nAre dictionaries ordered in Python 3.6+?\n\nThey are insertion ordered[1].\nAs of Python 3.6, for the CPython implementation of Python, dictionaries remember the order of items inserted. This is considered an implementation detail in Python 3.6; you need to use OrderedDict if you want insertion ordering that's guaranteed across other implementations of Python (and other ordered behavior[1]).\nAs of Python 3.7, this is a guaranteed language feature, not merely an implementation detail. From a python-dev message by GvR:\n\nMake it so. \"Dict keeps insertion order\" is the ruling. Thanks!\n\nThis simply means that you can depend on it. Other implementations of Python must also offer an insertion ordered dictionary if they wish to be a conforming implementation of Python 3.7.\n\n\nHow does the Python 3.6 dictionary implementation perform better[2] than the older one while preserving element order?\n\nEssentially, by keeping two arrays.\n\nThe first array, dk_entries, holds the entries (of type PyDictKeyEntry) for the dictionary in the order that they were inserted. Preserving order is achieved by this being an append only array where new items are always inserted at the end (insertion order).\n\nThe second, dk_indices, holds the indices for the dk_entries array (that is, values that indicate the position of the corresponding entry in dk_entries). This array acts as the hash table. When a key is hashed it leads to one of the indices stored in dk_indices and the corresponding entry is fetched by indexing dk_entries. Since only indices are kept, the type of this array depends on the overall size of the dictionary (ranging from type int8_t(1 byte) to int32_t/int64_t (4/8 bytes) on 32/64 bit builds)\n\n\nIn the previous implementation, a sparse array of type PyDictKeyEntry and size dk_size had to be allocated; unfortunately, it also resulted in a lot of empty space since that array was not allowed to be more than 2/3 * dk_size full for performance reasons. 
(and the empty space still had PyDictKeyEntry size!).\nThis is not the case now since only the required entries are stored (those that have been inserted) and a sparse array of type intX_t (X depending on dict size) 2/3 * dk_sizes full is kept. The empty space changed from type PyDictKeyEntry to intX_t.\nSo, obviously, creating a sparse array of type PyDictKeyEntry is much more memory demanding than a sparse array for storing ints.\nYou can see the full conversation on Python-Dev regarding this feature if interested, it is a good read.\n\nIn the original proposal made by Raymond Hettinger, a visualization of the data structures used can be seen which captures the gist of the idea.\n\nFor example, the dictionary:\nd = {'timmy': 'red', 'barry': 'green', 'guido': 'blue'}\n\nis currently stored as [keyhash, key, value]:\nentries = [['--', '--', '--'],\n [-8522787127447073495, 'barry', 'green'],\n ['--', '--', '--'],\n ['--', '--', '--'],\n ['--', '--', '--'],\n [-9092791511155847987, 'timmy', 'red'],\n ['--', '--', '--'],\n [-6480567542315338377, 'guido', 'blue']]\n\nInstead, the data should be organized as follows:\nindices = [None, 1, None, None, None, 0, None, 2]\nentries = [[-9092791511155847987, 'timmy', 'red'],\n [-8522787127447073495, 'barry', 'green'],\n [-6480567542315338377, 'guido', 'blue']]\n\n\nAs you can visually now see, in the original proposal, a lot of space is essentially empty to reduce collisions and make look-ups faster. With the new approach, you reduce the memory required by moving the sparseness where it's really required, in the indices.\n\n\n[1]: I say \"insertion ordered\" and not \"ordered\" since, with the existence of OrderedDict, \"ordered\" suggests further behavior that the `dict` object *doesn't provide*. OrderedDicts are reversible, provide order sensitive methods and, mainly, provide an order-sensive equality tests (`==`, `!=`). 
`dict`s currently don't offer any of those behaviors/methods.\n\n\n\n[2]: The new dictionary implementations performs better **memory wise** by being designed more compactly; that's the main benefit here. Speed wise, the difference isn't so drastic, there's places where the new dict might introduce slight regressions (key-lookups, for example) while in others (iteration and resizing come to mind) a performance boost should be present. \n\n\nOverall, the performance of the dictionary, especially in real-life situations, improves due to the compactness introduced. \n\n",
"Below is answering the original first question:\n\nShould I use dict or OrderedDict in Python 3.6?\n\nI think this sentence from the documentation is actually enough to answer your question\n\nThe order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon\n\ndict is not explicitly meant to be an ordered collection, so if you want to stay consistent and not rely on a side effect of the new implementation you should stick with OrderedDict.\nMake your code future proof :)\nThere's a debate about that here.\nEDIT: Python 3.7 will keep this as a feature see\n",
"Update:\nGuido van Rossum announced on the mailing list that as of Python 3.7 dicts in all Python implementations must preserve insertion order.\n",
"I wanted to add to the discussion above but don't have the reputation to comment.\nPython 3.8 includes the reversed() function on dictionaries (removing another difference from OrderedDict.\n\nDict and dictviews are now iterable in reversed insertion order using reversed(). (Contributed by Rémi Lapeyre in bpo-33462.)\nSee what's new in python 3.8\n\nI don't see any mention of the equality operator or other features of OrderedDict so they are still not entirely the same.\n",
"To fully answer this question in 2020, let me quote several statements from official Python docs:\n\nChanged in version 3.7: Dictionary order is guaranteed to be insertion order. This behavior was an implementation detail of CPython from 3.6.\n\n\nChanged in version 3.7: Dictionary order is guaranteed to be insertion order.\n\n\nChanged in version 3.8: Dictionaries are now reversible.\n\n\nDictionaries and dictionary views are reversible.\n\nA statement regarding OrderedDict vs Dict:\n\nOrdered dictionaries are just like regular dictionaries but have some extra capabilities relating to ordering operations. They have become less important now that the built-in dict class gained the ability to remember insertion order (this new behavior became guaranteed in Python 3.7).\n\n",
"Changed in version 3.7: Dictionary order is guaranteed to be insertion order. This behavior was an implementation detail of CPython from 3.6.\n"
] |
[
774,
82,
36,
21,
12,
0
] |
[] |
[] |
[
"dictionary",
"python",
"python_3.6",
"python_3.x",
"python_internals"
] |
stackoverflow_0039980323_dictionary_python_python_3.6_python_3.x_python_internals.txt
|
Q:
'DataFrame' object has no attribute 'plt'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#loading data
file=pd.read_csv("students_scoure.csv")
# print(file.shape)
# print(file.head())
# print(file.describe())
#plot the data
file.plt(x='Hours',y='Scores',style='o')
plt.show()
and I am getting this error:
5902 return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'plt'
How can I correct this mistake?
A:
matplotlib.pyplot:
matplotlib.pyplot is a collection of functions that make matplotlib work like MATLAB. Each pyplot function makes some change to a figure: e.g., creates a figure, creates a plotting area in a figure, plots some lines in a plotting area, decorates the plot with labels, etc.
You can either use the df directly to plot (pandas.DataFrame.plot which uses the matplotlib backend):
file.plot(x='Hours',y='Scores',style='o')
or:
plt.plot(file['Hours'], file['Scores'], 'o')
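A runnable sketch of the fix; since students_scoure.csv isn't available here, a small stand-in DataFrame is used, and the headless Agg backend keeps it display-free:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Stand-in for the CSV from the question (made-up values).
file = pd.DataFrame({"Hours": [1, 2, 3, 4], "Scores": [20, 40, 55, 80]})

ax = file.plot(x="Hours", y="Scores", style="o")  # pandas .plot, not .plt
plt.close(ax.figure)
```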
|
'DataFrame' object has no attribute 'plt'
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#loading data
file=pd.read_csv("students_scoure.csv")
# print(file.shape)
# print(file.head())
# print(file.describe())
#plot the data
file.plt(x='Hours',y='Scores',style='o')
plt.show()
and I am getting this error:
5902 return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'plt'
How can I correct this mistake?
|
[
"matplotlib.pyplot:\n\nmatplotlib.pyplot is a collection of functions that make matplotlib work like MATLAB. Each pyplot function makes some change to a figure: e.g., creates a figure, creates a plotting area in a figure, plots some lines in a plotting area, decorates the plot with labels, etc.\n\nYou can either use the df directly to plot (pandas.DataFrame.plot which uses the matplotlib backend):\nfile.plot(x='Hours',y='Scores',style='o')\n\nor:\nplt.plot(file['Hours'], file['Scores'], 'o')\n\n"
] |
[
0
] |
[] |
[] |
[
"matplotlib",
"pandas",
"python"
] |
stackoverflow_0074581022_matplotlib_pandas_python.txt
|
Q:
How to get click position on QPixmap
There is a QLabel:
self.image_label = QtGui.QLabel(self.centralwidget)
self.image_label.setSizePolicy(sizePolicy)
to which I put QPixmap (generated dynamically):
pixmap = QtGui.QPixmap(os.getcwd() + '\\deafult_title.png')
self.image_label.setPixmap(pixmap)
How do I get the x,y coordinates of a click, with respect to the image's top-left corner?
I know how to get position on label:
self.image_label.mousePressEvent = self.map_clicked
But the label has a margin that changes when I move the window.
I have also tried aligning QPixmap in label:
self.image_label.setAlignment(QtCore.Qt.AlignLeft)
And now there is a constant offset in the x,y position, but I'm not sure if this is the best way to do this.
Is there an easy way to get the click position in the image's coordinate system?
A:
I know it's been a while, but I found a solution by calculating the mouse click coordinates relative to the QPixmap object.
label = QLabel(...)
img_pix = QPixmap(...)
label.setPixmap(img_pix)
# now you can get mouse click coordinates on the label by overriding `label.mousePressEvent`
# assuming we have the mouse click coordinates
coord_x = ...
coord_y = ...
# calculating the mouse click coordinates relative to the QPixmap (img_pix)
img_pix_width = img_pix.width()
img_pix_height = img_pix.height()
label_width = label.width()
label_height = label.height()
scale_factor_width = label_width / img_pix_width
scale_factor_height = label_height / img_pix_height
relative_width_in_img_pix = coord_x / scale_factor_width
relative_height_in_img_pix = coord_y / scale_factor_height
relative_coordinates_in_img_pix = QPoint(relative_width_in_img_pix, relative_height_in_img_pix)
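When the pixmap is shown at its natural size rather than scaled to fill the label, the mapping is a constant offset instead of a scale factor. A small sketch of that arithmetic, assuming the label is set to Qt.AlignCenter (pure Python; no Qt is needed for the math itself):

```python
def label_to_pixmap(coord_x, coord_y, label_w, label_h, pix_w, pix_h):
    # Centering offset: (label size - pixmap size) // 2 on each axis.
    off_x = (label_w - pix_w) // 2
    off_y = (label_h - pix_h) // 2
    return coord_x - off_x, coord_y - off_y

# e.g. a 40x40 pixmap centered in a 100x80 label:
# a click at (60, 50) in label coordinates maps to (30, 30) on the pixmap.
```

With a different alignment (e.g. AlignLeft), only the corresponding offset term would apply.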
|
How to get click position on QPixmap
|
There is a QLabel:
self.image_label = QtGui.QLabel(self.centralwidget)
self.image_label.setSizePolicy(sizePolicy)
to which I put QPixmap (generated dynamically):
pixmap = QtGui.QPixmap(os.getcwd() + '\\deafult_title.png')
self.image_label.setPixmap(pixmap)
How do I get the x,y coordinates of a click, with respect to the image's top-left corner?
I know how to get position on label:
self.image_label.mousePressEvent = self.map_clicked
But the label has a margin that changes when I move the window.
I have also tried aligning QPixmap in label:
self.image_label.setAlignment(QtCore.Qt.AlignLeft)
And now there is a constant offset in the x,y position, but I'm not sure if this is the best way to do this.
Is there an easy way to get the click position in the image's coordinate system?
|
[
"I know it's been a while but i found a solution by calculation the mouse click coordinates relative to the QPixmap object.\nlabel = QLabel(...)\nimg_pix = QPixmap(...)\nlabel.setPixmap(img_pix)\n\n# now you can get mouse click coordinates on the label by overriding `label.mousePressEvent`\n\n# assuming we have the mouse click coordinates\ncoord_x = ...\ncoord_y = ...\n\n# calculating the mouse click coordinates relative to the QPixmap (img_pix)\nimg_pix_width = img_pix.width()\nimg_pix_heigth = img_pix.height()\n\nlabel_width = label.width()\nlabel_height = label.height()\n\nscale_factor_width = label_width / img_pix_width\nscale_factor_height = label_height / img_pix_heigth\n\nrelative_width_in_img_pix = coord_x / scale_factor_width \nrelative_height_in_img_pix = coord_y / scale_factor_height\n\nrelative_coordinates_in_img_pix = QPoint(relative_width_in_img_pix, relative_height_in_img_pix)\n\n"
] |
[
0
] |
[] |
[] |
[
"pyqt",
"pyqt4",
"python",
"python_2.7",
"qt"
] |
stackoverflow_0035507127_pyqt_pyqt4_python_python_2.7_qt.txt
|
Q:
capture video stream from a website to Flutter App
I am trying to build an app that shows a video stream after performing some image processing in Python, which then serves the stream on a website.
from flask import Flask,render_template,Response
import string
from datetime import datetime
from datetime import date
import cv2
import os
import ctypes # An included library with Python install.
cascPath=os.path.dirname(cv2.__file__)+"/data/haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascPath)
app=Flask(__name__)
def generate_frames():
posx=0
posy=0
video_capture = cv2.VideoCapture(0)
while True:
# Capture frame-by-frame
ret, frames = video_capture.read()
gray = cv2.cvtColor(frames, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(
gray,
scaleFactor=1.1,
minNeighbors=5,
minSize=(200, 200),
flags=cv2.CASCADE_SCALE_IMAGE
)
# Draw a rectangle around the faces
for (x, y, w, h) in faces:
rec=cv2.rectangle(frames, (x, y), (x+w, y+h), (0, 255, 0), 1)
posy=y
posx=x
cv2.line(img=frames, pt1=(100, 0), pt2=(100, 1000), color=(0, 255, 0), thickness=5,
lineType=8, shift=0)
if (posx<(100) and posx!=0 and posy <1000 and posy!=0 ):
s="Collision Detected at x {} and y {}"
ctypes.windll.user32.MessageBoxW(0,s.format(posx,posy), "Collision Detected", 1)
now = datetime.now()
#9:17:45.44343
today = date.today()
current_time = now.strftime("%H-%M-%S")
str="{} {} Capture.jpg"
sk=str.format(today,current_time)
cv2.imwrite(sk, frames)
print("capture saved at ",sk)
ret,buffer=cv2.imencode('.jpg',frames)
frame=buffer.tobytes()
if cv2.waitKey(1) & 0xFF == ord('q'):
break
yield(b'--frame\r\n'b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
cv2.imshow('Video', frames)
@app.route('/')
def index():
return render_template('index.html')
@app.route('/video')
def video():
return Response(generate_frames(),mimetype='multipart/x-mixed-replace; boundary=frame')
if __name__=="__main__":
app.run(host='0.0.0.0', port=8080)
The stream is embedded in the page with:
<img class=".img-fluid" src="{{ url_for('video') }}"
width="1200"height="700">
I am trying to figure out a way to access it in Flutter. The video_player plugin doesn't seem to help, as my stream is a continuous stream of images (a multipart MJPEG-style stream).
A:
A video stream coming over the RTSP protocol can easily be streamed in Flutter using the Flutter VLC Player package, so you don't need to integrate it with the Python server.
Just Add Link in the Controller:
_videoPlayerController = VlcPlayerController.network(
'rtsp://your Link',
hwAcc: HwAcc.FULL,
autoPlay: false,
options: VlcPlayerOptions(),
);
|
capture video stream from a website to Flutter App
|
I am trying to build an app that shows a video stream after performing some image processing in Python, which then serves the stream on a website.
from flask import Flask,render_template,Response
import string
from datetime import datetime
from datetime import date
import cv2
import os
import ctypes # An included library with Python install.
cascPath=os.path.dirname(cv2.__file__)+"/data/haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascPath)
app=Flask(__name__)
def generate_frames():
posx=0
posy=0
video_capture = cv2.VideoCapture(0)
while True:
# Capture frame-by-frame
ret, frames = video_capture.read()
gray = cv2.cvtColor(frames, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(
gray,
scaleFactor=1.1,
minNeighbors=5,
minSize=(200, 200),
flags=cv2.CASCADE_SCALE_IMAGE
)
# Draw a rectangle around the faces
for (x, y, w, h) in faces:
rec=cv2.rectangle(frames, (x, y), (x+w, y+h), (0, 255, 0), 1)
posy=y
posx=x
cv2.line(img=frames, pt1=(100, 0), pt2=(100, 1000), color=(0, 255, 0), thickness=5,
lineType=8, shift=0)
if (posx<(100) and posx!=0 and posy <1000 and posy!=0 ):
s="Collision Detected at x {} and y {}"
ctypes.windll.user32.MessageBoxW(0,s.format(posx,posy), "Collision Detected", 1)
now = datetime.now()
#9:17:45.44343
today = date.today()
current_time = now.strftime("%H-%M-%S")
str="{} {} Capture.jpg"
sk=str.format(today,current_time)
cv2.imwrite(sk, frames)
print("capture saved at ",sk)
ret,buffer=cv2.imencode('.jpg',frames)
frame=buffer.tobytes()
if cv2.waitKey(1) & 0xFF == ord('q'):
break
yield(b'--frame\r\n'b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
cv2.imshow('Video', frames)
@app.route('/')
def index():
return render_template('index.html')
@app.route('/video')
def video():
return Response(generate_frames(),mimetype='multipart/x-mixed-replace; boundary=frame')
if __name__=="__main__":
app.run(host='0.0.0.0', port=8080)
using command:
<img class=".img-fluid" src="{{ url_for('video') }}"
width="1200"height="700">
I am trying to figure out a way to access it in Flutter. The video_player plugin doesn't seem to help, as my stream is a continuous stream of images (a multipart MJPEG-style stream).
|
[
"Video Stream Coming from RTSP Protocol can easily be Streamed on to Flutter using Flutter VLC Player. So you don't need to integrate it with Python Server.\nJust Add Link in the Controller:\n _videoPlayerController = VlcPlayerController.network(\n 'rtsp://your Link',\n hwAcc: HwAcc.FULL,\n autoPlay: false,\n options: VlcPlayerOptions(),\n );\n\n"
] |
[
0
] |
[] |
[] |
[
"flutter",
"live",
"python",
"video_streaming"
] |
stackoverflow_0071920292_flutter_live_python_video_streaming.txt
|
Q:
Pandas rolling window selection based on a condition and calculate
How can I calculate a rolling-window mean based on a condition?
I need to calculate a rolling-window mean where, for each index, I include the rows whose coordinate difference is within a range of < 400.
I need to add this as a new column.
e.g.
at Index
cg13869341 = mean(cg13869341, cg14008030)
cg14008030 = mean(cg13869341, cg14008030)
cg14008031 = mean(cg13869341)
...
cg14008033 = mean(cg14008031,cg40826798, cg14008034, cg40826792)
....
cg40826792 = mean(cg60826792, cg47454306, cg14008034, cg14008033, cg40826792)
Example dataset
Index coordinate rolling_mean
cg13869341 100
cg14008030 200
cg14008031 800
cg40826798 900
cg14008033 1000
cg14008034 1050
cg40826792 1250
cg47454306 1500
A:
With the dataframe you provided:
import pandas as pd
df = pd.DataFrame(
{
"index": [
"cg13869341",
"cg14008030",
"cg14008031",
"cg40826798",
"cg14008033",
"cg14008034",
"cg40826792",
"cg47454306",
],
"coordinate": [100, 200, 800, 900, 1000, 1050, 1250, 1500],
}
)
Here is one way to do it using Pandas apply:
df["rolling_mean"] = df.apply(
lambda x: df.loc[
(df["coordinate"] >= x["coordinate"] - 400)
& (df["coordinate"] <= x["coordinate"] + 400),
"coordinate",
].mean(),
axis=1,
)
Then:
print(df)
# Output
index coordinate rolling_mean
0 cg13869341 100 150.0
1 cg14008030 200 150.0
2 cg14008031 800 937.5
3 cg40826798 900 1000.0
4 cg14008033 1000 1000.0
5 cg14008034 1050 1000.0
6 cg40826792 1250 1140.0
7 cg47454306 1500 1375.0
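The apply approach scans the whole frame once per row (O(n²)). Since the coordinates are sorted, the same windowed mean can be computed in O(n) with numpy's searchsorted and a cumulative sum; a sketch using the example data:

```python
import numpy as np

coords = np.array([100, 200, 800, 900, 1000, 1050, 1250, 1500], dtype=float)

# Index of the first/last row inside the window [c - 400, c + 400]
# for each coordinate c (coords must be sorted ascending).
lo = np.searchsorted(coords, coords - 400, side="left")
hi = np.searchsorted(coords, coords + 400, side="right")

# Prefix sums let us take each window's mean without re-scanning it.
csum = np.concatenate(([0.0], np.cumsum(coords)))
rolling_mean = (csum[hi] - csum[lo]) / (hi - lo)
# rolling_mean -> [150., 150., 937.5, 1000., 1000., 1000., 1140., 1375.]
```

These match the apply-based output above; assign the result back with `df["rolling_mean"] = rolling_mean`.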
|
Pandas rolling window selection based on a condition and calculate
|
How can I calculate a rolling-window mean based on a condition?
I need to calculate a rolling-window mean where, for each index, I include the rows whose coordinate difference is within a range of < 400.
I need to add this as a new column.
e.g.
at Index
cg13869341 = mean(cg13869341, cg14008030)
cg14008030 = mean(cg13869341, cg14008030)
cg14008031 = mean(cg13869341)
...
cg14008033 = mean(cg14008031,cg40826798, cg14008034, cg40826792)
....
cg40826792 = mean(cg60826792, cg47454306, cg14008034, cg14008033, cg40826792)
Example dataset
Index coordinate rolling_mean
cg13869341 100
cg14008030 200
cg14008031 800
cg40826798 900
cg14008033 1000
cg14008034 1050
cg40826792 1250
cg47454306 1500
|
[
"With the dataframe you provided:\nimport pandas as pd\n\ndf = pd.DataFrame(\n {\n \"index\": [\n \"cg13869341\",\n \"cg14008030\",\n \"cg14008031\",\n \"cg40826798\",\n \"cg14008033\",\n \"cg14008034\",\n \"cg40826792\",\n \"cg47454306\",\n ],\n \"coordinate\": [100, 200, 800, 900, 1000, 1050, 1250, 1500],\n }\n)\n\nHere is one way to do it using Pandas apply:\ndf[\"rolling_mean\"] = df.apply(\n lambda x: df.loc[\n (df[\"coordinate\"] >= x[\"coordinate\"] - 400)\n & (df[\"coordinate\"] <= x[\"coordinate\"] + 400),\n \"coordinate\",\n ].mean(),\n axis=1,\n)\n\nThen:\nprint(df)\n# Output\n index coordinate rolling_mean\n0 cg13869341 100 150.0\n1 cg14008030 200 150.0\n2 cg14008031 800 937.5\n3 cg40826798 900 1000.0\n4 cg14008033 1000 1000.0\n5 cg14008034 1050 1000.0\n6 cg40826792 1250 1140.0\n7 cg47454306 1500 1375.0\n\n"
] |
[
0
] |
[] |
[] |
[
"mean",
"pandas",
"python",
"rolling_computation"
] |
stackoverflow_0074558364_mean_pandas_python_rolling_computation.txt
|
Q:
java.lang.NoClassDefFoundError: scala/Product$class using read function from PySpark
I'm new to PySpark, and I'm just trying to read a table from my Redshift database.
The code looks like the following:
import findspark
findspark.add_packages("io.github.spark-redshift-community:spark-redshift_2.11:4.0.1")
findspark.init()
spark = SparkSession.builder.appName("Dim_Customer").getOrCreate()
df_read_1 = spark.read \
.format("io.github.spark_redshift_community.spark.redshift") \
.option("url", "jdbc:redshift://fake_ip:5439/fake_database?user=fake_user&password=fake_password") \
.option("dbtable", "dim_customer") \
.option("tempdir", "https://bucket-name.s3.region-code.amazonaws.com/") \
.load()
I'm getting the error: java.lang.NoClassDefFoundError: scala/Product$class
I'm using Spark version 3.2.2 with Python 3.9.7
Could someone help me, please?
Thank you in advance!
A:
You're using the wrong version of the spark-redshift connector: your version is for Spark 2.4, which uses Scala 2.11, while you need the version for Spark 3, which uses Scala 2.12. Change the version to 5.1.0, which was released recently (all released versions are listed here).
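Concretely, the package coordinates in the question would become the Scala 2.12 build (a dependency-coordinate change only; the rest of the code stays the same):

```python
findspark.add_packages("io.github.spark-redshift-community:spark-redshift_2.12:5.1.0")
```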
|
java.lang.NoClassDefFoundError: scala/Product$class using read function from PySpark
|
I'm new to PySpark, and I'm just trying to read a table from my Redshift database.
The code looks like the following:
import findspark
findspark.add_packages("io.github.spark-redshift-community:spark-redshift_2.11:4.0.1")
findspark.init()
spark = SparkSession.builder.appName("Dim_Customer").getOrCreate()
df_read_1 = spark.read \
.format("io.github.spark_redshift_community.spark.redshift") \
.option("url", "jdbc:redshift://fake_ip:5439/fake_database?user=fake_user&password=fake_password") \
.option("dbtable", "dim_customer") \
.option("tempdir", "https://bucket-name.s3.region-code.amazonaws.com/") \
.load()
I'm getting the error: java.lang.NoClassDefFoundError: scala/Product$class
I'm using Spark version 3.2.2 with Python 3.9.7
Could someone help me, please?
Thank you in advance!
|
[
"You're using wrong version of the spark-redshift connector - your version is for Spark 2.4 that uses Scala 2.11, while you need version for Spark 3 that uses Scala 2.12 - change version to 5.1.0 that was released recently (all released versions are listed here)\n"
] |
[
0
] |
[] |
[] |
[
"amazon_redshift",
"amazon_s3",
"apache_spark",
"pyspark",
"python"
] |
stackoverflow_0074578273_amazon_redshift_amazon_s3_apache_spark_pyspark_python.txt
|
Q:
Tensorflow calculate hessian of model weights in a batch
I am replicating a paper. I have a basic Keras CNN model for MNIST classification. Now, for a sample z in the training set, I want to calculate the Hessian matrix of the model parameters with respect to the loss on that sample. I want to average this Hessian over the training data (n is the number of training samples).
My final goal is to calculate this value (the influence score):
I can calculate the left term and the right term and want to compute the Hessian term. I don't know how to calculate hessian for the model weights for a batch of examples (vectorization). I was able to calculate it only for a sample at a time which is too slow.
x=tf.convert_to_tensor(x_train[0:13])
with tf.GradientTape() as t2:
with tf.GradientTape() as t1:
y=model(x)
mce = tf.keras.losses.CategoricalCrossentropy()
y_expanded=y_train[train_idx]
loss=mce(y_expanded,y)
g = t1.gradient(loss, model.weights[4])
h = t2.jacobian(g, model.weights[4])
print(h.shape)
For clarification, if a model layer is of dimension 20*30, I want to feed a batch of 13 samples to it and get a Hessian of dimension (13,20,30,20,30). Now I can only get Hessian of dimension (20,30,20,30) which thwarts the vectorization (the code above).
This thread has the same problem, except that I want the second-order derivative rather than the first-order.
I also tried the script below, which returns a (13,20,30,20,30) matrix that satisfies the dimensions, but when I manually compared the sum of this matrix with the sum of 13 single-Hessian calculations in a for loop from 0 to 12, the numbers differed, so this does not work either, since I expected equal values.
x=tf.convert_to_tensor(x_train[0:13])
mce = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
with tf.GradientTape() as t2:
with tf.GradientTape() as t1:
t1.watch(model.weights[4])
y_expanded=y_train[0:13]
y=model(x)
loss=mce(y_expanded,y)
j1=t1.jacobian(loss, model.weights[4])
j3 = t2.jacobian(j1, model.weights[4])
print(j3.shape)
A:
That's how Hessians are defined: you can only calculate the Hessian of a scalar function.
But nothing new here; the same happens with gradients, and what is done to handle batches is to accumulate the gradients. Something similar can be done with the Hessian.
If you know how to compute the Hessian of the loss, it means you could define a batch cost and still compute the Hessian with the same method, e.g. you could define your cost as sum(losses), where losses is the vector of losses for all examples in the batch.
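The accumulation idea can be seen with a throwaway scalar model (all numbers here are made up): for a per-sample loss l_i(w) = (w·x_i − y_i)², the second derivative with respect to w is 2·x_i², and the Hessian of the summed batch loss is just the sum of the per-sample Hessians:

```python
# Hypothetical scalar model with made-up data: l_i(w) = (w*x_i - y_i)**2
xs = [1.0, 2.0, 3.0]
ys = [2.0, 3.0, 5.0]

def hessian_single(x):
    # d^2/dw^2 of (w*x - y)^2 is 2*x^2 (constant in w and y for this model)
    return 2.0 * x * x

per_sample = [hessian_single(x) for x in xs]   # [2.0, 8.0, 18.0]
batch_hessian = sum(per_sample)                # 28.0 == Hessian of sum(losses)
```

The same identity is what lets you run one tape over sum(losses) instead of 13 separate per-sample Hessian computations.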
A:
Suppose you have a model and you want to train the model weights by taking the Hessian of the training images with respect to the trainable weights:
#Import the libraries we need
import tensorflow as tf
from tensorflow.python.eager import forwardprop
model = tf.keras.models.load_model('model.h5')
#Define the Adam Optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.98,
epsilon=1e-9)
#Define the loss function
def loss_function(y_true , y_pred):
return tf.keras.losses.sparse_categorical_crossentropy(y_true , y_pred , from_logits=True)
#Define the Accuracy metric function
def accuracy_function(y_true , y_pred):
return tf.keras.metrics.sparse_categorical_accuracy(y_true , y_pred)
Now, define the variables for storing the mean of the loss and accuracy
train_loss = tf.keras.metrics.Mean(name='loss')
train_accuracy = tf.keras.metrics.Mean(name='accuracy')
#Compute a Hessian-vector product (forward-over-reverse) instead of the full Hessian, for efficiency
vector = [tf.ones_like(v) for v in model.trainable_variables]
def _forward_over_back_hvp(images, labels):
with forwardprop.ForwardAccumulator(model.trainable_variables, vector) as acc:
with tf.GradientTape() as grad_tape:
logits = model(images, training=True)
loss = loss_function(labels ,logits)
grads = grad_tape.gradient(loss, model.trainable_variables)
hessian = acc.jvp(grads)
optimizer.apply_gradients(zip(hessian, model.trainable_variables))
train_loss(loss) #keep adding the loss
train_accuracy(accuracy_function(labels, logits)) #Keep adding the accuracy
#Now, here we need to call the function and train it
import time
for epoch in range(20):
start = time.time()
train_loss.reset_states()
train_accuracy.reset_states()
for i,(x , y) in enumerate(dataset):
_forward_over_back_hvp(x , y)
if(i%50==0):
print(f'Epoch {epoch + 1} Loss {train_loss.result():.4f} Accuracy {train_accuracy.result():.4f}')
print(f'Time taken for 1 epoch: {time.time() - start:.2f} secs\n')
Epoch 1 Loss 2.6396 Accuracy 0.1250
Time taken for 1 epoch: 0.23 secs
|
Tensorflow calculate hessian of model weights in a batch
|
I am replicating a paper. I have a basic Keras CNN model for MNIST classification. Now, for a sample z in the training set, I want to calculate the Hessian matrix of the model parameters with respect to the loss on that sample. I want to average this Hessian over the training data (n is the number of training samples).
My final goal is to calculate this value (the influence score):
I can calculate the left term and the right term and want to compute the Hessian term. I don't know how to calculate hessian for the model weights for a batch of examples (vectorization). I was able to calculate it only for a sample at a time which is too slow.
x=tf.convert_to_tensor(x_train[0:13])
with tf.GradientTape() as t2:
with tf.GradientTape() as t1:
y=model(x)
mce = tf.keras.losses.CategoricalCrossentropy()
y_expanded=y_train[train_idx]
loss=mce(y_expanded,y)
g = t1.gradient(loss, model.weights[4])
h = t2.jacobian(g, model.weights[4])
print(h.shape)
For clarification, if a model layer is of dimension 20*30, I want to feed a batch of 13 samples to it and get a Hessian of dimension (13,20,30,20,30). Now I can only get Hessian of dimension (20,30,20,30) which thwarts the vectorization (the code above).
This thread has the same problem, except that I want the second-order derivative rather than the first-order.
I also tried the below script which returns a (13,20,30,20,30) matrix that satisfies the dimension, but when I manually checked the sum of this matrix with the sum of 13 single hessian calculations with a for loop from 0 to 12, they lead to different numbers so it does not work either since I expected equal values.
x=tf.convert_to_tensor(x_train[0:13])
mce = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
with tf.GradientTape() as t2:
with tf.GradientTape() as t1:
t1.watch(model.weights[4])
y_expanded=y_train[0:13]
y=model(x)
loss=mce(y_expanded,y)
j1=t1.jacobian(loss, model.weights[4])
j3 = t2.jacobian(j1, model.weights[4])
print(j3.shape)
|
[
"That's how hessians are defined, you can only calculate a hessian of a scalar function.\nBut nothing new here, the same happens with gradients, and what is done to handle batches is to accumulate the gradients, something similar can be done with the hessian.\nIf you know how to compute the hessian of the loss, it means you could define batch cost and still be able to compute the hessian with the same method. e.g. you could define your cost as the sum(losses) where losses is the vector of losses for all examples in the batch.\n",
"Let's Suppose you have a model and you wanna train the model weights by taking the Hessian of the training images w.r.t trainable-weights\n#Import the libraries we need\nimport tensorflow as tf\nfrom tensorflow.python.eager import forwardprop\n\nmodel = tf.keras.models.load_model('model.h5')\n\n#Define the Adam Optimizer\noptimizer = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.98,\n epsilon=1e-9)\n\n#Define the loss function\ndef loss_function(y_true , y_pred):\n return tf.keras.losses.sparse_categorical_crossentropy(y_true , y_pred , from_logits=True)\n\n#Define the Accuracy metric function\ndef accuracy_function(y_true , y_pred):\n return tf.keras.metrics.sparse_categorical_accuracy(y_true , y_pred)\n\nNow, define the variables for storing the mean of the loss and accuracy\ntrain_loss = tf.keras.metrics.Mean(name='loss')\ntrain_accuracy = tf.keras.metrics.Mean(name='accuracy')\n\n#Now compute the Hessian in some different style for better efficiency of the model\nvector = [tf.ones_like(v) for v in model.trainable_variables]\n\ndef _forward_over_back_hvp(images, labels):\n \n with forwardprop.ForwardAccumulator(model.trainable_variables, vector) as acc:\n with tf.GradientTape() as grad_tape:\n logits = model(images, training=True)\n loss = loss_function(labels ,logits)\n grads = grad_tape.gradient(loss, model.trainable_variables)\n hessian = acc.jvp(grads)\n \n optimizer.apply_gradients(zip(hessian, model.trainable_variables))\n \n train_loss(loss) #keep adding the loss\n train_accuracy(accuracy_function(labels, logits)) #Keep adding the accuracy\n\n#Now, here we need to call the function and train it\nimport time\nfor epoch in range(20):\n start = time.time()\n\n train_loss.reset_states()\n train_accuracy.reset_states()\n\n for i,(x , y) in enumerate(dataset):\n _forward_over_back_hvp(x , y)\n\n if(i%50==0):\n print(f'Epoch {epoch + 1} Loss {train_loss.result():.4f} Accuracy {train_accuracy.result():.4f}')\n\n print(f'Time taken for 1 
epoch: {time.time() - start:.2f} secs\\n')\n\nEpoch 1 Loss 2.6396 Accuracy 0.1250\nTime is taken for 1 epoch: 0.23 secs\n\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"keras",
"machine_learning",
"python",
"tensorflow",
"vectorization"
] |
stackoverflow_0074454228_keras_machine_learning_python_tensorflow_vectorization.txt
|
Q:
No module named 'tensorflow.keras' ModuleNotFoundError:
My system information :
Windows version : 11
Python version : 3.10.7
Tensorflow : 2.11.0
pip : 22.3.1
I have checked the previous questions which are similar to mine but they didn't help.
ModuleNotFoundError: No module named 'tensorflow.keras'
Traceback Error: ModuleNotFoundError: No module named 'tensorflow.keras'
I prefer to use a virtual env over Conda; that's why I am not using TensorFlow through Conda.
When I run the program below in JupyterLab, it gives this error.
import sys
sys.version
'3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64
bit (AMD64)]'
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow.keras as keras
ModuleNotFoundError Traceback (most recent call last)
Input In [2], in <cell line: 5>()
3 import matplotlib.pyplot as plt
4 import tensorflow as tf
----> 5 import tensorflow.keras as keras
ModuleNotFoundError: No module named 'tensorflow.keras'
How to fix ModuleNotFoundError: No module named 'tensorflow.keras' ?
|
No module named 'tensorflow.keras' ModuleNotFoundError:
|
My system information :
Windows version : 11
Python version : 3.10.7
Tensorflow : 2.11.0
pip : 22.3.1
I have checked the previous questions which are similar to mine but they didn't help.
ModuleNotFoundError: No module named 'tensorflow.keras'
Traceback Error: ModuleNotFoundError: No module named 'tensorflow.keras'
I prefer to use a virtual env over Conda; that's why I am not using TensorFlow through Conda.
When I run the program below in JupyterLab, it gives an error.
import sys
sys.version
'3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64
bit (AMD64)]'
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow.keras as keras
ModuleNotFoundError Traceback (most recent call last)
Input In [2], in <cell line: 5>()
3 import matplotlib.pyplot as plt
4 import tensorflow as tf
----> 5 import tensorflow.keras as keras
ModuleNotFoundError: No module named 'tensorflow.keras'
How to fix ModuleNotFoundError: No module named 'tensorflow.keras' ?
|
[] |
[] |
[
"Issue will resolve easily by doing the following steps\n\nFirst you have to check whether you had installed tensorflow in system or not if yes then it will work in jupyter notebook.\n\nBut after installing in system tensorflow it shows this error same then uninstall python version and download 3.9 version and after that again install tensorflow in cmd and install in system not in through virtual environment so jupyter note book you can download and it will be work after importing tensorflow\n\nDelete anaconda its useful but it is upto you so this was the simple steps you should follow and you'll see that the issue has resolved.\n\n\nHope, this will help you\n"
] |
[
-1
] |
[
"jupyter_notebook",
"modulenotfounderror",
"python",
"tensorflow"
] |
stackoverflow_0074580987_jupyter_notebook_modulenotfounderror_python_tensorflow.txt
|
Q:
How should I use docker for multiple scripts when each script is running a different logic?
I have a project in which 2 scripts are generating data (24/7) and sending it to Kafka. At the same time, one or more consumer scripts are consuming the data from Kafka and processing it.
My question is about how I should deploy this application, as I am quite new to Docker. I have two ideas in mind, but I am not sure which I should use (or whether another approach would be better):
Option 1:
Pros:
Independent containers.
Easier to scale.
Cons:
More difficult to manage.
More use of resources.
Option 2:
Pros:
Less use of resources.
Cons:
More difficult to scale (as script 1 and 2 are in the same container).
More use of resources.
P.S.: Bonus points if somebody is also able to tell me whether keeping the consumption script (Script3) in its own container makes sense if I plan to scale it as the number of producers increases.
A:
When trying to get something working, a useful maxim is:
Premature optimization is the root of all evil.
The right answer will depend on exactly how the two producer scripts work. But in general, Docker expects containers to run a single service process on a single port. So the 4-container approach is where you should start.
|
How should I use docker for multiple scripts when each script is running a different logic?
|
I have a project in which 2 scripts are generating data (24/7) and sending it to Kafka. At the same time, one or more consumer scripts are consuming the data from Kafka and processing it.
My question is about how I should deploy this application, as I am quite new to Docker. I have two ideas in mind, but I am not sure which I should use (or whether another approach would be better):
Option 1:
Pros:
Independent containers.
Easier to scale.
Cons:
More difficult to manage.
More use of resources.
Option 2:
Pros:
Less use of resources.
Cons:
More difficult to scale (as script 1 and 2 are in the same container).
More use of resources.
P.S.: Bonus points if somebody is also able to tell me whether keeping the consumption script (Script3) in its own container makes sense if I plan to scale it as the number of producers increases.
|
[
"When trying to get something working, a useful maxim is:\n\nPremature optimization is the root of all evil.\n\nThe right answer will depend on exactly how the two producer scripts work. But in general, Docker expects containers to run a single service process on a single port. So the 4 container approach is where you should start.\n"
] |
[
0
] |
[] |
[] |
[
"deployment",
"docker",
"docker_compose",
"python"
] |
stackoverflow_0074574848_deployment_docker_docker_compose_python.txt
|
Q:
Python 3.9+ Bluetooth on Windows 10
I've already found very similar questions about this problem, but I can't figure it out.
I'm trying to connect a TimeBox Evo over Bluetooth to Windows 10 using Python with this code:
import socket
serverMACAddress = "11:75:58:ce:c7:52"
port = 4
print("Start")
s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM)
s.connect((serverMACAddress,port))
while 1:
text = input()
if text == "quit":
break
s.send(bytes(text, 'UTF-8'))
s.close()
and I get this error:
OSError: [WinError 10064]
Although I get the error, the device connects to the PC, but I can't send and receive data using Python.
A:
I had the same issue and it worked with a port value of 1...
|
Python 3.9+ Bluetooth on Windows 10
|
I've already found very similar questions about this problem, but I can't figure it out.
I'm trying to connect a TimeBox Evo over Bluetooth to Windows 10 using Python with this code:
import socket
serverMACAddress = "11:75:58:ce:c7:52"
port = 4
print("Start")
s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM)
s.connect((serverMACAddress,port))
while 1:
text = input()
if text == "quit":
break
s.send(bytes(text, 'UTF-8'))
s.close()
and I get this error:
OSError: [WinError 10064]
Although I get the error, the device connects to the PC, but I can't send and receive data using Python.
|
[
"I had the same issue and it worked with a port value of 1...\n"
] |
[
0
] |
[] |
[] |
[
"bluetooth",
"python",
"sockets"
] |
stackoverflow_0073113252_bluetooth_python_sockets.txt
|
Q:
Unable to parse span tag using python Selenium
I am unable to parse the date in the form "2022-11-26".
I used a CSS selector and XPath, but could only parse the "2022-" in the span tag.
Can you please advise me on this?
<div class="medium-widget event-widget last">
<div class="shrubbery">
<h2 class="widget-title"><span aria-hidden="true" class="icon-calendar"></span>Upcoming Events</h2>
<p class="give-me-more"><a href="/events/calendars/" title="More Events">More</a></p>
<ul class="menu">
<li>
<time datetime="2022-11-26T00:00:00+00:00"><span class="say-no-more">2022-</span>11-26</time>
<a href="/events/python-events/1331/">De Ja vu</a></li>
<li>
<ul>
I tried to get just the year string, but did not get any output:
year = driver.find_element(By.CSS_SELECTOR, ".event-widget time span")
A:
Try the below one:
driver.find_element(By.XPATH, ".//time").text
It gives the output as:
2022-11-26
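Outside a live browser session, the reason the XPath above returns the full date can be sketched with the standard-library HTML parser: the text of the <time> element is the concatenation of its own text and the nested span's text (the HTML fragment below is copied from the question):

```python
from html.parser import HTMLParser

# Fragment taken from the question's markup.
SNIPPET = ('<time datetime="2022-11-26T00:00:00+00:00">'
           '<span class="say-no-more">2022-</span>11-26</time>')

class TimeText(HTMLParser):
    """Collect all text nodes inside the <time> element, nested spans included."""
    def __init__(self):
        super().__init__()
        self.depth = 0      # >0 while we are inside <time>
        self.parts = []
    def handle_starttag(self, tag, attrs):
        if tag == "time" or self.depth:
            self.depth += 1
    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1
    def handle_data(self, data):
        if self.depth:
            self.parts.append(data)

p = TimeText()
p.feed(SNIPPET)
print("".join(p.parts))  # 2022-11-26
```

This mirrors what Selenium's .text on the <time> element does: it yields the visible text of the element and all its descendants, which is why targeting the inner span alone only gives "2022-".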
|
Unable to parse span tag using python Selenium
|
I am unable to parse the date in the form "2022-11-26".
I used a CSS selector and XPath, but could only parse the "2022-" in the span tag.
Can you please advise me on this?
<div class="medium-widget event-widget last">
<div class="shrubbery">
<h2 class="widget-title"><span aria-hidden="true" class="icon-calendar"></span>Upcoming Events</h2>
<p class="give-me-more"><a href="/events/calendars/" title="More Events">More</a></p>
<ul class="menu">
<li>
<time datetime="2022-11-26T00:00:00+00:00"><span class="say-no-more">2022-</span>11-26</time>
<a href="/events/python-events/1331/">De Ja vu</a></li>
<li>
<ul>
I tried to get just the year string, but did not get any output:
year = driver.find_element(By.CSS_SELECTOR, ".event-widget time span")
|
[
"Try the below one:\ndriver.find_element(By.XPATH, \".//time\").text\n\nIt gives the output as:\n2022-11-26\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"selenium"
] |
stackoverflow_0074577867_python_selenium.txt
|
Q:
Code completion is not working for OpenCV and Python
I am using Ubuntu 14.04. I have installed OpenCV using Adrian Rosebrock's guide. I am also using PyCharm for programming python and opencv.
My problem is that I can use code completion for cv2 modules, but code completion won't work for instances created from cv2. An example is shown below.
This works:
This does not:
There is no runtime error when I run my program, as expected; for example, cap.isOpened() works without an error.
A:
Though I am a Windows user, I also faced a similar problem. In my case, I could solve this problem by importing this way:
from cv2 import cv2
As I lack knowledge of how Python imports modules, I can't clearly explain why this solves the problem, but it works anyway.
Good luck.
A:
The openCV python module is a dynamically generated wrapper of the underlying c++ library. PyCharm relies on the availability of python source code to provide autocomplete functionality. When the source code is missing (as in the opencv case), pycharm will generate skeleton files with function prototypes and rely on those for autocompletion but with diminished capabilities.
As a result when you autocomplete at
cv2.
it can figure out that the module cv2 has the following members and provide suggestions.
On the other hand when you
cap = cv2.VideoCapture(file_name)
PyCharm can figure out that you just called a method from the cv2 module and assigned it to cap but has no information about the type of the result of this method and does not know where to go look for suggestions for
cap.
If you try the same things in shell mode, you will see the behavior you actually expected to see, since in shell mode will actually introspect live objects (it will ask the created cap object what members it has and provide those as suggestions)
You can also write stubs for the opencv module yourself to enable correct autocompletion in edit mode.
Take a look here
A:
If anyone still is experiencing the issue, downgrading to opencv to 4.5.5.62 helped my case.
A:
I am using PyCharm on windows 10 and faced similar issue on the intellisense for cv2.
This is my solution:
Pycharm>File>Manage IDE settings> Restore Default settings
Restart the Pycharm IDE
Reconfigure Python Interpretor
|
Code completion is not working for OpenCV and Python
|
I am using Ubuntu 14.04. I have installed OpenCV using Adrian Rosebrock's guide. I am also using PyCharm for programming python and opencv.
My problem is that I can use code completion for cv2 modules, but code completion won't work for instances created from cv2. An example is shown below.
This works:
This does not:
There is no runtime error when I run my program, as expected; for example, cap.isOpened() works without an error.
|
[
"Though I am Window user, I also had faced similar problem with you. In my case, I could solve this problem by importing this way:\nfrom cv2 import cv2\n\nAs I'm lack of knowledge of how does the python imports module, I can't explain you clearly about why this solve the problem, but it works anyway.\nGood luck.\n",
"The openCV python module is a dynamically generated wrapper of the underlying c++ library. PyCharm relies on the availability of python source code to provide autocomplete functionality. When the source code is missing (as in the opencv case), pycharm will generate skeleton files with function prototypes and rely on those for autocompletion but with diminished capabilities.\nAs a result when you autocomplete at\ncv2.\n\nit can figure out that the module cv2 has the following members and provide suggestions.\nOn the other hand when you\ncap = cv2.VideoCapture(file_name)\n\nPyCharm can figure out that you just called a method from the cv2 module and assigned it to cap but has no information about the type of the result of this method and does not know where to go look for suggestions for\ncap.\n\n\nIf you try the same things in shell mode, you will see the behavior you actually expected to see, since in shell mode will actually introspect live objects (it will ask the created cap object what members it has and provide those as suggestions)\n\nYou can also write stubs for the opencv module yourself to enable correct autocompletion in edit mode.\nTake a look here\n",
"If anyone still is experiencing the issue, downgrading to opencv to 4.5.5.62 helped my case.\n",
"I am using PyCharm on windows 10 and faced similar issue on the intellisense for cv2.\nThis is my solution:\n\nPycharm>File>Manage IDE settings> Restore Default settings\nRestart the Pycharm IDE\nReconfigure Python Interpretor\n\n\n\n"
] |
[
11,
8,
1,
0
] |
[] |
[] |
[
"code_completion",
"intellisense",
"opencv",
"python"
] |
stackoverflow_0043093400_code_completion_intellisense_opencv_python.txt
|
Q:
How to count length of column while some rows have NaN in it?
I have a pandas DataFrame. In one column I have lists, but some rows are NaN. I want to find the length of each list; where the value is NaN, I want 0 as the length.
My_column
[1, 2]-> should return 2
[] -> should return 0
NaN -> should return 0
Any help?
Thank you.
A:
df['column'].str.len().fillna(0).astype(int)
A:
You can check to see if the item is a list:
If it is a list - identify the length of that list
If it is not a list (eg. np.nan) - then set to zero.
output = [len(x) if isinstance(x, list) else 0 for x in df['column']]
Here is an example using your inputs
import pandas as pd
import numpy as np
df = pd.DataFrame({'column': [['a','b'], np.nan, []]})
output = [len(x) if isinstance(x, list) else 0 for x in df['column']]
print(output)
OUTPUT:
[2, 0, 0]
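Putting the two answers side by side, the one-liner from the first answer handles the same inputs — .str.len() is not restricted to strings and computes len() element-wise for list values too (the toy column below is made up to mirror the question):

```python
import numpy as np
import pandas as pd

# Toy column with a list, an empty list, and a NaN row.
df = pd.DataFrame({'My_column': [[1, 2], [], np.nan]})

# .str.len() applies len() element-wise (lists included) and yields
# NaN for the NaN row; fillna(0) maps that NaN to 0.
lengths = df['My_column'].str.len().fillna(0).astype(int)
print(lengths.tolist())  # [2, 0, 0]
```

The list-comprehension approach gives the same result, but the vectorized version stays inside pandas and returns a Series you can assign back to the DataFrame.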
|
How to count length of column while some rows have NaN in it?
|
I have a pandas DataFrame. In one column I have lists, but some rows are NaN. I want to find the length of each list; where the value is NaN, I want 0 as the length.
My_column
[1, 2]-> should return 2
[] -> should return 0
NaN -> should return 0
Any help?
Thank you.
|
[
"df['column'].str.len().fillna(0).astype(int)\n\n",
"You can check to see if the item is a list:\n\nIf it is a list - identify the length of that list\nIf it is not a list (eg. np.nan) - then set to zero.\n\noutput = [len(x) if isinstance(x, list) else 0 for x in df['column']]\n\n\n\nHere is an example using your inputs\nimport pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame({'column': [['a','b'], np.nan, []]})\n\noutput = [len(x) if isinstance(x, list) else 0 for x in df['column']]\n\nprint(output)\n\nOUTPUT:\n[2, 0, 0]\n\n"
] |
[
3,
1
] |
[] |
[] |
[
"list",
"pandas",
"python"
] |
stackoverflow_0074579167_list_pandas_python.txt
|
Q:
MinMaxScaler Python Changes original Data
I am trying to use 4 of my 5 CSV columns to predict the last column.
I used MinMaxScaler to scale my data to the 0-1 range,
but at some point, when I want to inverse_transform it, MinMaxScaler changes my original data. Here is my code:
dataset = read_csv('zz.csv', header=0, index_col=0)
values = dataset.values
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
After I split my scaled data into train_X and train_y, I put them into my model and fit it:
train = values[:168, :]
test = values[168:, :]
train_X, train_y = train[:, [0,1,3,4]], train[:, 2]
test_X, test_y = test[:, [0,1,3,4]], test[:, 2]
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
my model is an LSTM:
# design network
model = Sequential()
model.add(LSTM(4, return_sequences=True, activation="relu", input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(LSTM(16, return_sequences=False, activation="relu"))
model.add(Dense(1))
nadam = tf.keras.optimizers.Nadam(learning_rate=0.0005, beta_1=0.9, beta_2=0.999, epsilon=1e-07)
model.compile(loss='mae', optimizer=nadam, metrics=[tf.keras.metrics.MeanSquaredError()])
stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_mean_squared_error', patience=5)
history = model.fit(train_X, train_y, epochs=2000, verbose=1, validation_split=0.2, shuffle=False, callbacks=[stop_early])
Then I'll use test_X for prediction. In the next lines, I concatenate my yhat (my predicted data) and my test_y with test_X in order to inverse_transform them, and then make inv_yhat and inv_y for further use, like calculating MSE, MAE, etc.
# make a prediction
yhat = model.predict(test_X)
yhat = yhat.reshape(yhat.shape[0],1)
test_X = test_X.reshape((test_X.shape[0], test_X.shape[2]))
# invert scaling for forecast
inv_yhat = np.concatenate((yhat, test_X), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
inv_yhat = inv_yhat[:,0]
# invert scaling for actual
test_y = test_y.reshape((len(test_y), 1))
inv_y = np.concatenate((test_y, test_X), axis=1)
inv_y = scaler.inverse_transform(inv_y)
inv_y = inv_y[:,2]
But the problem is that when I use inverse_transform, it changes my test_X data to values that are different from the original test_X.
For example, these are my first 5 values in test_X:
array([[69.34],
[69.66],
[69.6],
[69.38],
[69.51],
And these are my inv_y, which are the same test_X values but after inverse_transform:
array([[68.78412 ],
[68.73931 ],
[68.715935],
[68.65166 ],
[68.69646 ],
I've also tried to fit_transform only the train data and transform the test data, but had the same problem.
A:
You are scaling your data with the label column at index 2:
train_X, train_y = train[:, [0,1,3,4]], train[:, 2]
test_X, test_y = test[:, [0,1,3,4]], test[:, 2]
When you inverse-scale, however, the label column is at a different position:
inv_yhat = np.concatenate((yhat, test_X), axis=1)
inv_y = np.concatenate((test_y, test_X), axis=1)
You should recheck your feature positions and that the array you want to rescale has the same structure as the original.
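The effect is easy to reproduce with toy data: inverse_transform only undoes the scaling correctly when each column sits in the same position it occupied during fit. The 3-column array below is made up for illustration; column reordering stands in for the question's concatenate step that moves the label to the front:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical data: three columns with very different ranges.
values = np.array([[1.0, 10.0, 100.0],
                   [2.0, 20.0, 200.0],
                   [3.0, 30.0, 300.0]])
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)

# Columns in their fitted positions: the round trip is exact.
roundtrip = scaler.inverse_transform(scaled)
print(np.allclose(roundtrip, values))  # True

# Column 2 moved to the front (like concatenating yhat before test_X):
# each column is now inverted with another column's min/max, so the
# "recovered" values no longer match the reordered originals.
shuffled = scaled[:, [2, 0, 1]]
wrong = scaler.inverse_transform(shuffled)
print(np.allclose(wrong, values[:, [2, 0, 1]]))  # False
```

So before calling inverse_transform, rebuild the array so every column is back in the position it had when the scaler was fitted.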
|
MinMaxScaler Python Changes original Data
|
I am trying to use 4 of my 5 CSV columns to predict the last column.
I used MinMaxScaler to scale my data to the 0-1 range,
but at some point, when I want to inverse_transform it, MinMaxScaler changes my original data. Here is my code:
dataset = read_csv('zz.csv', header=0, index_col=0)
values = dataset.values
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
After I split my scaled data into train_X and train_y, I put them into my model and fit it:
train = values[:168, :]
test = values[168:, :]
train_X, train_y = train[:, [0,1,3,4]], train[:, 2]
test_X, test_y = test[:, [0,1,3,4]], test[:, 2]
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
my model is an LSTM:
# design network
model = Sequential()
model.add(LSTM(4, return_sequences=True, activation="relu", input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(LSTM(16, return_sequences=False, activation="relu"))
model.add(Dense(1))
nadam = tf.keras.optimizers.Nadam(learning_rate=0.0005, beta_1=0.9, beta_2=0.999, epsilon=1e-07)
model.compile(loss='mae', optimizer=nadam, metrics=[tf.keras.metrics.MeanSquaredError()])
stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_mean_squared_error', patience=5)
history = model.fit(train_X, train_y, epochs=2000, verbose=1, validation_split=0.2, shuffle=False, callbacks=[stop_early])
Then I'll use test_X for prediction. In the next lines, I concatenate my yhat (my predicted data) and my test_y with test_X in order to inverse_transform them, and then make inv_yhat and inv_y for further use, like calculating MSE, MAE, etc.
# make a prediction
yhat = model.predict(test_X)
yhat = yhat.reshape(yhat.shape[0],1)
test_X = test_X.reshape((test_X.shape[0], test_X.shape[2]))
# invert scaling for forecast
inv_yhat = np.concatenate((yhat, test_X), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
inv_yhat = inv_yhat[:,0]
# invert scaling for actual
test_y = test_y.reshape((len(test_y), 1))
inv_y = np.concatenate((test_y, test_X), axis=1)
inv_y = scaler.inverse_transform(inv_y)
inv_y = inv_y[:,2]
But the problem is that when I use inverse_transform, it changes my test_X data to values that are different from the original test_X.
For example, these are my first 5 values in test_X:
array([[69.34],
[69.66],
[69.6],
[69.38],
[69.51],
And these are my inv_y, which are the same test_X values but after inverse_transform:
array([[68.78412 ],
[68.73931 ],
[68.715935],
[68.65166 ],
[68.69646 ],
I've also tried to fit_transform only the train data and transform the test data, but had the same problem.
|
[
"You are scaling your data when your label column is at the second index.\ntrain_X, train_y = train[:, [0,1,3,4]], train[:, 2]\ntest_X, test_y = test[:, [0,1,3,4]], test[:, 2]\n\nIf you inverse scale the label column is at a different position.\ninv_yhat = np.concatenate((yhat, test_X), axis=1)\ninv_y = np.concatenate((test_y, test_X), axis=1)\n\nYou should recheck your feature positions and that the array you want to rescale has the same structure as the original.\n"
] |
[
3
] |
[] |
[] |
[
"lstm",
"python",
"scikit_learn"
] |
stackoverflow_0074580438_lstm_python_scikit_learn.txt
|
Q:
How do I change the text/value of "Add [Model-Name]" button in Django Admin?
When we log in to the Django admin interface as a superuser, we see the list of models in the left sidebar. When we click on any model name, we go to the list display page of that model, which has an "Add [Model-Name]" button in the upper right corner. How do I change the text/value of that button? In my case, I have a User model, and I want to change the "Add User" text on the list display page of the User model to "Invite User". How do I accomplish that? I have encircled the button in red in the attached screenshot.
Django Admin Interface Screenshot
I have tried the different solutions given in this Stack Overflow question and in this Django documentation, but I am unable to achieve it. I tried to override change_form.html by changing {% blocktranslate with name=opts.verbose_name %}Add {{ name }}{% endblocktranslate %} to {% blocktranslate with name=opts.verbose_name %}Invite {{ name }}{% endblocktranslate %}. I put the overridden change_form.html file in pricingmeister/accounts/templates/admin/. But I could not see the change.
The hierarchy of my Django Project and folders is below:
Django Project Hierarchy Screenshot
Below is my settings.py (some code truncated to show only the relevant part):
.
.
.
INSTALLED_APPS = [
# Local Apps
"pricingmeister.accounts",
"pricingmeister.core",
# Django Apps
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
]
.
.
.
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
],
},
},
]
.
.
.
What am I doing wrong?
A:
You can use the Meta class options verbose_name and verbose_name_plural to change the model name.
For example,
from django.contrib.auth.models import AbstractUser
class User(AbstractUser):
...
class Meta:
verbose_name = "Invite User"
verbose_name_plural = "Invite Users"
For more details, please read the Django documentation.
|
How do I change the text/value of "Add [Model-Name]" button in Django Admin?
|
When we log in to the Django admin interface as a superuser, we see the list of models in the left sidebar. When we click on any model name, we go to the list display page of that model, which has an "Add [Model-Name]" button in the upper right corner. How do I change the text/value of that button? In my case, I have a User model, and I want to change the "Add User" text on the list display page of the User model to "Invite User". How do I accomplish that? I have encircled the button in red in the attached screenshot.
Django Admin Interface Screenshot
I have tried the different solutions given in this Stack Overflow question and in this Django documentation, but I am unable to achieve it. I tried to override change_form.html by changing {% blocktranslate with name=opts.verbose_name %}Add {{ name }}{% endblocktranslate %} to {% blocktranslate with name=opts.verbose_name %}Invite {{ name }}{% endblocktranslate %}. I put the overridden change_form.html file in pricingmeister/accounts/templates/admin/. But I could not see the change.
The hierarchy of my Django Project and folders is below:
Django Project Hierarchy Screenshot
Below is my settings.py (some code truncated to show only the relevant part):
.
.
.
INSTALLED_APPS = [
# Local Apps
"pricingmeister.accounts",
"pricingmeister.core",
# Django Apps
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
]
.
.
.
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
],
},
},
]
.
.
.
What am I doing wrong?
|
[
"You can use Meta class options {verbose_name, verbose_name_plural} to change model name.\nfor example,\nfrom django.contrib.auth.models import AbstractUser\n\nclass User(AbstractUser):\n ...\n \n class Meta:\n verbose_name = \"Invite User\"\n verbose_name_plural = \"Invite Users\"\n\nfor more details please read django documentation\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_admin",
"django_admin_tools",
"python",
"python_3.x"
] |
stackoverflow_0074533766_django_django_admin_django_admin_tools_python_python_3.x.txt
|
Q:
Executing external python file from inside ns3
I have a Python file containing a pre-trained model. How can I execute this file from inside ns-3 code? The Python file should start executing once enough data has been generated by ns-3; that data will be given to the pre-trained model. The model then predicts one value, which is used in ns-3 during the simulation.
I tried Calling Python script from C++ and using its output, but it is not helpful in my case. I only expect to execute a Python file from ns-3.
A:
In my case, I have tried the following piece of code in a function where I was required to execute the external python file from ns-3. This specific example is for the Ubuntu environment.
system("/[path_to_your_python]/anaconda3/bin/python /[path_to_your_inference_file]/inference.py");
Note: The inference.py file will be executed whenever the C++ function is called, making the simulation too time-consuming compared to normal circumstances.
Suggestion: I would suggest using ONNX.
|
Executing external python file from inside ns3
|
I have a Python file containing a pre-trained model. How can I execute this file from inside ns-3 code? The Python file should start executing once enough data has been generated by ns-3; that data will be given to the pre-trained model. The model then predicts one value, which is used in ns-3 during the simulation.
I tried Calling Python script from C++ and using its output, but it is not helpful in my case. I only expect to execute a Python file from ns-3.
|
[
"In my case, I have tried the following piece of code in a function where I was required to execute the external python file from ns-3. This specific example is for the Ubuntu environment.\nsystem(\"/[path_to_your_python]/anaconda3/bin/python /[path_to_your_inference_file]/inference.py\");\n\nNote: The inference.py file will be executed whenever the C++ function is called, making the simulation too time-consuming compared to normal circumstances.\nSuggestion: I would suggest using ONNX.\n"
] |
[
1
] |
[] |
[] |
[
"c++",
"ns_3",
"python"
] |
stackoverflow_0074548280_c++_ns_3_python.txt
|
Q:
Python Selenium and Docker
I'm trying to create multiple containers with RPAs using Selenium and Python. How can I do this without installing Python and its libraries in each container? For example, a base container with all dependencies that I can export to the other containers. Or can it not be done?
services:
chromedriver:
container_name: chromedriver
image: selenium/standalone-chrome:latest
shm_size: 2gb
ports:
- 4444:4444
- 5900:5900
restart: always
bank_1:
build:
dockerfile: Dockerfile-bank1
container_name: bank_1
command: python3 bank_1.py
ports:
- 8000:8000
depends_on:
- chromedriver
bank_2:
build:
dockerfile: Dockerfile-bank1
container_name: bank_2
command: python3 bank_2.py
ports:
- 8001:8001
depends_on:
- chromedriver
A:
The usual way of doing this is to create an image and host it on DockerHub/ECR. When you change the code, you re-build the image and push a new version, meaning that the dependencies will be re-fetched once. And then your docker-compose services will reference this remote image as many times as needed.
To automate re-building the image, you can use tools like CircleCI or GitHub Actions.
(If you are only ever running this locally, then you may be able to skip the CI and DockerHub pieces and just build the image on your computer.)
Note also that you would typically not duplicate the service itself in the compose file, but rather use docker service scale or a reverse proxy like traefik to manage multiple identical instances.
|
Python Selenium and Docker
|
I'm trying to create multiple containers with RPAs using Selenium and Python. How can I do this without installing Python and its libraries in each container? For example, a base container with all dependencies that I can export to the other containers. Or can it not be done?
services:
chromedriver:
container_name: chromedriver
image: selenium/standalone-chrome:latest
shm_size: 2gb
ports:
- 4444:4444
- 5900:5900
restart: always
bank_1:
build:
dockerfile: Dockerfile-bank1
container_name: bank_1
command: python3 bank_1.py
ports:
- 8000:8000
depends_on:
- chromedriver
bank_2:
build:
dockerfile: Dockerfile-bank1
container_name: bank_2
command: python3 bank_2.py
ports:
- 8001:8001
depends_on:
- chromedriver
|
[
"The usual way of doing this is to create an image and host it on DockerHub/ECR. When you change the code, you re-build the image and push a new version, meaning that the dependencies will be re-fetched once. And then your docker-compose services will reference this remote image as many times as needed.\nTo automate re-building the image, you can use tools like CircleCI or GitHub Actions.\n(If you are only ever running this locally, then you may be able to skip the CI and DockerHub pieces and just build the image on your computer.)\nNote also that you would typically not duplicate the service itself in the compose file, but rather use docker service scale or a reverse proxy like traefik to manage multiple identical instances.\n"
] |
[
0
] |
[] |
[] |
[
"docker",
"docker_compose",
"python",
"selenium"
] |
stackoverflow_0074574253_docker_docker_compose_python_selenium.txt
|
Q:
Replace tokens with other words with NLTK in python
this is my first question.
I've been working on an assignment in which I had to build a Notepad and then add a lexical analyzer function to it. The goal was to write code in the notepad and then use the lexical analyzer to break it up and categorize it; for the last part, it had to replace each tokenized word categorized as an "Identifier" with "Id" plus a running number, and lastly print the code again with this change.
I've achieved almost everything, but this last part of changing the tokenized words has been difficult for me.
def cmdAnalyze ():
Analyze_program = notepad.get(0.0, END)
Analyze_program_tokens = nltk.wordpunct_tokenize(Analyze_program);
RE_keywords = "auto|break|case|char|const|continue|default|print"
RE_Operators = "(\++)|(-)|(=)|(\*)|(/)|(%)|(--)|(<=)|(>=)"
RE_Numerals = "^(\d+)$"
RE_Especial_Character = "[\[@&!#$\^\|{}\]:;<>?,\.']|\(\)|\(|\)|{}|\[\]|\""
RE_Identificadores = "^[a-zA-Z_]+[a-zA-Z0-9_]*"
RE_Headers = "([a-zA-Z]+\.[h])"
# Categorización de tokens
notepad.insert(END, "\n ")
for token in Analyze_program_tokens:
if (re.findall(RE_keywords, token)):
notepad.insert(END, "\n " + token + " --------> Palabra clave")
elif (re.findall(RE_Operators, token)):
notepad.insert(END, "\n " + token + " --------> Operador")
elif (re.findall(RE_Numerals, token)):
notepad.insert(END, "\n " + token + " --------> Número")
elif (re.findall(RE_Especial_Character, token)):
notepad.insert(END, "\n " + token + " --------> Carácter especial/Símbolo")
elif (re.findall(RE_Identificadores, token)):
notepad.insert(END, "\n " + token + " --------> Identificadores")
elif (re.findall(RE_Headers, token)):
notepad.insert(END, "\n " + token + " --------> Headers")
else:
notepad.insert(END, "\n " + " Valor desconocido")
notepad.insert(END, "\n ")
notepad.insert(END, Analyze_program_tokens)
This is my current output:
>>> print(‘Hello World’)
>>> --------> Carácter especial/Símbolo
print --------> Palabra clave
(‘ --------> Carácter especial/Símbolo
Hello --------> Identificadores
World --------> Identificadores
’) --------> Carácter especial/Símbolo
>>> print (‘ Hello World ’)
`
The last line output has to be like this: ">>> print (‘ Id1 Id2 ’)"
Thank you for reading :)
A:
I would add
id_count = 0
just before the for loop, and then modify the handling of identifiers like this:
elif (re.findall(RE_Identificadores, token)):
id_count += 1
    notepad.insert(END, "\n " + f"Id{id_count:02d}" + " --------> Identificadores")
EDIT
On second thoughts, what happens if an identifier occurs more than once in the notepad? Should "hello, hello world" result in "Id01, Id01 Id02" or in "Id01, Id02 Id03"?
In the first case you will need a dictionary. So before the for loop let's also add
ids = {}
and use the dictionary like this
elif (re.findall(RE_Identificadores, token)):
if token not in ids:
id_count += 1
ids[token] = id_count
    notepad.insert(END, "\n " + f"Id{ids[token]:02d}" + " --------> Identificadores")
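To produce the final rewritten line (">>> print (‘ Id1 Id2 ’)"), the same dictionary idea can be applied to the whole token list. A minimal standalone sketch (without the Tkinter notepad; the keyword set and identifier regex are taken from the question):

```python
import re

RE_IDENT = r"^[a-zA-Z_]+[a-zA-Z0-9_]*"
KEYWORDS = {"auto", "break", "case", "char", "const", "continue", "default", "print"}

def replace_identifiers(tokens):
    """Replace each distinct identifier token with Id1, Id2, ...,
    reusing the same Id for repeated identifiers."""
    ids = {}
    out = []
    for token in tokens:
        if token not in KEYWORDS and re.match(RE_IDENT, token):
            if token not in ids:
                ids[token] = f"Id{len(ids) + 1}"
            out.append(ids[token])
        else:
            out.append(token)
    return out

tokens = [">>>", "print", "(‘", "Hello", "World", "’)"]
print(" ".join(replace_identifiers(tokens)))  # >>> print (‘ Id1 Id2 ’)
```

Joining the rewritten tokens with spaces reproduces the expected last line; in the original program the result would be passed to notepad.insert instead of print.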
|
Replace tokens with other words with NLTK in python
|
this is my first question.
I've been working on this assignment, in which I had to build a Notepad and then add a lexical analyzer function to it. The goal was to write code in the notepad and then use the lexical analyzer to break it up and categorize it; for the last part, it had to replace the tokenized words categorized as "Identifiers" with Id plus that identifier's number, and finally print the code again with this change.
I've achieved almost everything, but this last part, changing the tokenized words, has been difficult for me.
`
def cmdAnalyze ():
Analyze_program = notepad.get(0.0, END)
Analyze_program_tokens = nltk.wordpunct_tokenize(Analyze_program);
RE_keywords = "auto|break|case|char|const|continue|default|print"
RE_Operators = "(\++)|(-)|(=)|(\*)|(/)|(%)|(--)|(<=)|(>=)"
RE_Numerals = "^(\d+)$"
RE_Especial_Character = "[\[@&!#$\^\|{}\]:;<>?,\.']|\(\)|\(|\)|{}|\[\]|\""
RE_Identificadores = "^[a-zA-Z_]+[a-zA-Z0-9_]*"
RE_Headers = "([a-zA-Z]+\.[h])"
# Categorización de tokens
notepad.insert(END, "\n ")
for token in Analyze_program_tokens:
if (re.findall(RE_keywords, token)):
notepad.insert(END, "\n " + token + " --------> Palabra clave")
elif (re.findall(RE_Operators, token)):
notepad.insert(END, "\n " + token + " --------> Operador")
elif (re.findall(RE_Numerals, token)):
notepad.insert(END, "\n " + token + " --------> Número")
elif (re.findall(RE_Especial_Character, token)):
notepad.insert(END, "\n " + token + " --------> Carácter especial/Símbolo")
elif (re.findall(RE_Identificadores, token)):
notepad.insert(END, "\n " + token + " --------> Identificadores")
elif (re.findall(RE_Headers, token)):
notepad.insert(END, "\n " + token + " --------> Headers")
else:
notepad.insert(END, "\n " + " Valor desconocido")
notepad.insert(END, "\n ")
notepad.insert(END, Analyze_program_tokens)
This is my current output:
>>> print(‘Hello World’)
>>> --------> Carácter especial/Símbolo
print --------> Palabra clave
(‘ --------> Carácter especial/Símbolo
Hello --------> Identificadores
World --------> Identificadores
’) --------> Carácter especial/Símbolo
>>> print (‘ Hello World ’)
`
The last line output has to be like this: ">>> print (‘ Id1 Id2 ’)"
Thank you for reading :)
|
[
"I would add\nid_count = 0\n\njust before the for loop, and then modify the handling of identifiers like this:\nelif (re.findall(RE_Identificadores, token)):\n id_count += 1\n notepad.insert(END, \"\\n \" + f\"Id{id_count:02d}' + \" --------> Identificadores\")\n\nEDIT\nOn second thoughts, what happens if an identifier occurs more than once in the notepad? \"hello, hello world\" should result in \"Id01, Id01 Id02\" or in \"Id01, Id02 Id03\"?\nIn the first case you will need a dictionary. So before the for loop let's also add\nids = {}\n\nand use the dictionary like this\nelif (re.findall(RE_Identificadores, token)):\n if token not in ids:\n id_count += 1\n ids[token] = id_count\n notepad.insert(END, \"\\n \" + f\"Id{ids[token]:02d}' + \" --------> Identificadores\")\n\n"
] |
[
0
] |
[] |
[] |
[
"nlp",
"nltk",
"python"
] |
stackoverflow_0074568563_nlp_nltk_python.txt
|
Q:
How to build python with --enable-framework (--enable-shared) on macos?
I want to use PyInstaller to build a MultiOS application. The Project already has a virtual environment using the venv which comes with python by default (have not installed pyenv). I ran into multiple problems and searched a lot.
Finally I've come to the conclusion that the problem is that my installed version of python does not have the shared framework enabled, and I have to rebuild my python? I actually have no clue how to do that. Any help or a link on how to do it would really be appreciated. Thank you very much.
This is the error, which directed me here:
If you're building Python by yourself, please rebuild your Python with '--enable-shared' (or, '--enable-framework' on Darwin)
A:
For anyone using pyenv, this is what has worked for me:
PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install 3.6.15
Found here.
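To verify whether the interpreter you are about to bundle was actually built this way, a small sketch (using the CPython build-configuration variable names; Py_ENABLE_SHARED corresponds to --enable-shared and PYTHONFRAMEWORK to --enable-framework on macOS):

```python
import sysconfig

# A build with --enable-shared (or --enable-framework on macOS) exposes
# these build-config variables; PyInstaller needs one of them to be set.
shared = sysconfig.get_config_var("Py_ENABLE_SHARED")
framework = sysconfig.get_config_var("PYTHONFRAMEWORK")
print("shared/framework build:", bool(shared) or bool(framework))
```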
|
How to build python with --enable-framework (--enable-shared) on macos?
|
I want to use PyInstaller to build a MultiOS application. The Project already has a virtual environment using the venv which comes with python by default (have not installed pyenv). I ran into multiple problems and searched a lot.
Finally I've come to the conclusion that the problem is that my installed version of python does not have the shared framework enabled, and I have to rebuild my python? I actually have no clue how to do that. Any help or a link on how to do it would really be appreciated. Thank you very much.
This is the error, which directed me here:
If you're building Python by yourself, please rebuild your Python with '--enable-shared' (or, '--enable-framework' on Darwin)
|
[
"For anyone using pyenv, this is what has worked for me:\nPYTHON_CONFIGURE_OPTS=\"--enable-framework\" pyenv install 3.6.15\n\nFound here.\n"
] |
[
0
] |
[] |
[] |
[
"pyinstaller",
"python",
"python_3.x"
] |
stackoverflow_0060917013_pyinstaller_python_python_3.x.txt
|
Q:
VS Code Azure functions deployment failing with Python Version 3.9
I have a function app (python) in the azure portal which is in python version 3.7.
The FUNCTIONS_EXTENSION_VERSION of the function app is ~3.
When I deploy the function python) from VS code to update the function in the portal, I'm able to deploy and the update is reflected in the azure portal.
But when I change the python version to 3.9 and update FUNCTIONS_EXTENSION_VERSION to ~4 in the Azure portal and try to deploy the function(python) from VS code to update the function in the portal, deployment failed with error "deployer = ms-azuretools-vscode deploymentPath = Functions App ZipDeploy. Extract zip. Remote build."
The deployment is failing only after upgrading to version 3.9.
Could anyone please help me to understand why am I getting this error and how can we fix this?
A:
To upgrade the Python Version 3.7 to 3.9
Step 1: Update FUNCTIONS_EXTENSION_VERSION to 4 and Python version of Azure Function App in the Portal using the cmdlet:
az functionapp config set --name krishpyfunapp37to39 --resource-group HariTestRG --linux-fx-version "PYTHON|3.9"
Make Sure Runtime Version is 4 in the General Settings of the Azure Function App Configuration Menu.
Step 2:
Select the Python Interpreter as 3.9.x version in your Azure Functions Project using the VS Code IDE:
(Ctrl + Shift + P and Type as "Python Interpreter")
If the Virtual Environment is activated in the project, make sure you update the home path and version number in virtual environment folder of the project:
home = C:\Users\Hari\AppData\Local\Programs\Python\Python39
include-system-site-packages = false
version = 3.9.13
Note: Update the code and packages to match the Python version 3.9.x when upgrading/downgrading > Test and then deploy. So that it will not break the code due to changes.
Refer to this MS Doc for steps on migrating the Azure Functions versions 3.x to 4.x and Python Versions.
|
VS Code Azure functions deployment failing with Python Version 3.9
|
I have a function app (python) in the azure portal which is in python version 3.7.
The FUNCTIONS_EXTENSION_VERSION of the function app is ~3.
When I deploy the function python) from VS code to update the function in the portal, I'm able to deploy and the update is reflected in the azure portal.
But when I change the python version to 3.9 and update FUNCTIONS_EXTENSION_VERSION to ~4 in the Azure portal and try to deploy the function(python) from VS code to update the function in the portal, deployment failed with error "deployer = ms-azuretools-vscode deploymentPath = Functions App ZipDeploy. Extract zip. Remote build."
The deployment is failing only after upgrading to version 3.9.
Could anyone please help me to understand why am I getting this error and how can we fix this?
|
[
"To upgrade the Python Version 3.7 to 3.9\nStep 1: Update FUNCTIONS_EXTENSION_VERSION to 4 and Python version of Azure Function App in the Portal using the cmdlet:\naz functionapp config set --name krishpyfunapp37to39 --resource-group HariTestRG --linux-fx-version \"PYTHON|3.9\"\n\n\n\nMake Sure Runtime Version is 4 in the General Settings of the Azure Function App Configuration Menu.\nStep 2:\nSelect the Python Interpreter as 3.9.x version in your Azure Functions Project using the VS Code IDE:\n(Ctrl + Shift + P and Type as \"Python Interpreter\")\n\nIf the Virtual Environment is activated in the project, make sure you update the home path and version number in virtual environment folder of the project:\nhome = C:\\Users\\Hari\\AppData\\Local\\Programs\\Python\\Python39\ninclude-system-site-packages = false\nversion = 3.9.13\n\n\nNote: Update the code and packages that matches the Python version 3.9.x when upgrading/downgrading > Test and then deploy. So that, it will not break the code due to changes.\n\nRefer to this MS Doc for steps on migrating the Azure Functions versions 3.x to 4.x and Python Versions.\n"
] |
[
0
] |
[] |
[] |
[
"azure_functions",
"azure_functions_core_tools",
"python",
"visual_studio_code"
] |
stackoverflow_0074572839_azure_functions_azure_functions_core_tools_python_visual_studio_code.txt
|
Q:
How to replace countries other than 'India' and 'U.S.A' by 'Other' in pandas dataframe?
I have the following df:
df = pd.DataFrame({
'Q0_0': ["India", "Algeria", "India", "U.S.A", "Morocco", "Tunisia", "U.S.A", "France", "Russia", "Algeria"],
'Q1_1': [np.random.randint(1,100) for i in range(10)],
'Q1_2': np.random.random(10),
'Q1_3': np.random.randint(2, size=10),
'Q2_1': [np.random.randint(1,100) for i in range(10)],
'Q2_2': np.random.random(10),
'Q2_3': np.random.randint(2, size=10)
})
It has following display:
      Q0_0  Q1_1      Q1_2  Q1_3  Q2_1      Q2_2  Q2_3
0    India    21  0.326856     0    51  0.520506     0
1  Algeria     7  0.504580     1    43  0.953744     1
2    India    67  0.327273     1    34  0.840453     1
3    U.S.A    49  0.056478     0    67  0.309559     1
4  Morocco    71  0.743913     1    76  0.240706     1
5  Tunisia    31  0.060707     1    78  0.576598     0
6    U.S.A    25  0.588239     1    61  0.133856     1
7   France    99  0.991723     0    85  0.274825     1
8   Russia     9  0.846950     1    61  0.279948     1
9  Algeria    79  0.176326     1    78  0.881051     1
I need to change countries other than India and U.S.A to Other in column Q0_0.
Desired output
Q0_0 Q1_1 Q1_2 Q1_3 Q2_1 Q2_2 Q2_3
0 India 21 0.326856 0 51 0.520506 0
1 Other 7 0.504580 1 43 0.953744 1
2 India 67 0.327273 1 34 0.840453 1
3 U.S.A 49 0.056478 0 67 0.309559 1
4 Other 71 0.743913 1 76 0.240706 1
5 Other 31 0.060707 1 78 0.576598 0
6 U.S.A 25 0.588239 1 61 0.133856 1
7 Other 99 0.991723 0 85 0.274825 1
8 Other 9 0.846950 1 61 0.279948 1
9 Other 79 0.176326 1 78 0.881051 1
I tried to use pandas.Series.str.replace() but it didn't work.
Any help from your side will be highly appreciated, thanks.
A:
You can use pandas.Series.mask with pandas.Series.fillna :
df["Q0_0"]= df["Q0_0"].mask(~df["Q0_0"].isin(["India", "U.S.A"])).fillna("Other")
# Output :
print(df)
Q0_0 Q1_1 Q1_2 Q1_3 Q2_1 Q2_2 Q2_3
0 India 43 0.681795 0 36 0.772289 0
1 Other 85 0.695352 1 14 0.989219 1
2 India 69 0.684015 1 85 0.687373 0
3 U.S.A 10 0.175235 1 52 0.825989 1
4 Other 90 0.998192 0 59 0.482667 0
5 Other 27 0.723308 0 90 0.054042 1
6 U.S.A 38 0.973819 0 69 0.536380 1
7 Other 10 0.815710 1 2 0.134707 1
8 Other 38 0.238863 1 1 0.872125 1
9 Other 96 0.078010 0 84 0.650347 0
A:
You could use:
df['Q0_0'] = df['Q0_0'].str.replace('Algeria', 'Other')
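A third option, not shown in the answers above but a common idiom, is numpy.where, which keeps the whitelisted values and maps everything else to "Other" in one step (a sketch with a shortened frame):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Q0_0": ["India", "Algeria", "U.S.A", "France"]})

# Keep the whitelisted countries, map everything else to "Other"
df["Q0_0"] = np.where(df["Q0_0"].isin(["India", "U.S.A"]), df["Q0_0"], "Other")
print(df["Q0_0"].tolist())  # ['India', 'Other', 'U.S.A', 'Other']
```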
|
How to replace countries other than 'India' and 'U.S.A' by 'Other' in pandas dataframe?
|
I have the following df:
df = pd.DataFrame({
'Q0_0': ["India", "Algeria", "India", "U.S.A", "Morocco", "Tunisia", "U.S.A", "France", "Russia", "Algeria"],
'Q1_1': [np.random.randint(1,100) for i in range(10)],
'Q1_2': np.random.random(10),
'Q1_3': np.random.randint(2, size=10),
'Q2_1': [np.random.randint(1,100) for i in range(10)],
'Q2_2': np.random.random(10),
'Q2_3': np.random.randint(2, size=10)
})
It has following display:
      Q0_0  Q1_1      Q1_2  Q1_3  Q2_1      Q2_2  Q2_3
0    India    21  0.326856     0    51  0.520506     0
1  Algeria     7  0.504580     1    43  0.953744     1
2    India    67  0.327273     1    34  0.840453     1
3    U.S.A    49  0.056478     0    67  0.309559     1
4  Morocco    71  0.743913     1    76  0.240706     1
5  Tunisia    31  0.060707     1    78  0.576598     0
6    U.S.A    25  0.588239     1    61  0.133856     1
7   France    99  0.991723     0    85  0.274825     1
8   Russia     9  0.846950     1    61  0.279948     1
9  Algeria    79  0.176326     1    78  0.881051     1
I need to change countries other than India and U.S.A to Other in column Q0_0.
Desired output
Q0_0 Q1_1 Q1_2 Q1_3 Q2_1 Q2_2 Q2_3
0 India 21 0.326856 0 51 0.520506 0
1 Other 7 0.504580 1 43 0.953744 1
2 India 67 0.327273 1 34 0.840453 1
3 U.S.A 49 0.056478 0 67 0.309559 1
4 Other 71 0.743913 1 76 0.240706 1
5 Other 31 0.060707 1 78 0.576598 0
6 U.S.A 25 0.588239 1 61 0.133856 1
7 Other 99 0.991723 0 85 0.274825 1
8 Other 9 0.846950 1 61 0.279948 1
9 Other 79 0.176326 1 78 0.881051 1
I tried to use pandas.Series.str.replace() but it didn't work.
Any help from your side will be highly appreciated, thanks.
|
[
"You can use pandas.Series.mask with pandas.Series.fillna :\ndf[\"Q0_0\"]= df[\"Q0_0\"].mask(~df[\"Q0_0\"].isin([\"India\", \"U.S.A\"])).fillna(\"Other\")\n\n# Output :\nprint(df)\n\n Q0_0 Q1_1 Q1_2 Q1_3 Q2_1 Q2_2 Q2_3\n0 India 43 0.681795 0 36 0.772289 0\n1 Other 85 0.695352 1 14 0.989219 1\n2 India 69 0.684015 1 85 0.687373 0\n3 U.S.A 10 0.175235 1 52 0.825989 1\n4 Other 90 0.998192 0 59 0.482667 0\n5 Other 27 0.723308 0 90 0.054042 1\n6 U.S.A 38 0.973819 0 69 0.536380 1\n7 Other 10 0.815710 1 2 0.134707 1\n8 Other 38 0.238863 1 1 0.872125 1\n9 Other 96 0.078010 0 84 0.650347 0\n\n",
"You could use:\ndf['Q0_0'] = df['Q0_0'].str.replace('Algeria', 'Other')\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074581359_dataframe_pandas_python.txt
|
Q:
Clear way to check if n consecutive values out of N are bigger than a threshold and save their index (corresponding to n)
I would like to locate n consecutive points out of N from a vector of data with length L.
Example: 2 out 3 consecutive points bigger than a certain threshold tr.
data = [201,202,203, ..., L]
L = len(data)
N=3
tr = 200
for i in range(L-N+1):
subset = data[i:i+N]
if (subset[0] > tr and subset[1] > tr) or (subset[1] > tr and subset[2] > tr):
"save index"
I would like to know how to save the index of the elements that satisfy such conditions?
Is there an elegant way to do it more flexibly (n out of N)?
A:
Start by flagging the items that are within the desired range. Then perform a rolling sum of the flags and select the matching indexes in subranges that have the minimum count of flagged items.
from itertools import islice
def getOver(data,minVal=200,minCount=2,window=3):
inRange = [minVal<=n for n in data]
groups = (islice(inRange,s,None) for s in range(window))
indexes = { i for s,r in enumerate(zip(*groups)) if sum(r)>=minCount
for i in range(s,s+window) if inRange[i]}
return sorted(indexes)
print(getOver([100,205,205]))
[1, 2]
print(getOver([201,205,205,150,190,203,100,205]))
[0, 1, 2, 5, 7]
A:
I just noticed that Alin T.'s answer is not completely correct. For example, with getOver([203,100,205]) we get [0, 2], which is not quite correct, as there are not n (2) consecutive elements greater than the defined threshold (200). The output should be an empty array ([]).
Here is a solution:
def xoutXconsecutive(data, tr, n):
out = []
for i in range(0, len(data)-n+1):
subset = np.array(data[i:i+n])
isNotConsecutive = [False]*n
for j in range(0, len(subset)):
if (subset[j] < tr):
isNotConsecutive[j] = True
aux = False
for k in range(1, len(isNotConsecutive)):
if isNotConsecutive[k]:
aux = True
break
if ( (isNotConsecutive[0] == False) and (aux == False) ):
out += [i+(n-2), i+(n-1)]
elif ( (isNotConsecutive[0] == True) and (aux == False)):
out += [i+(n-1)]
return np.unique(out)
print(xoutXconsecutive([203,100,205], 200, 3))
[]
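For the strict reading that the second answer uses (all n values in a window must exceed the threshold), a vectorized sketch with numpy can find the start index of every such run; note this returns window starts rather than the per-element indexes of the answers above:

```python
import numpy as np

def over_threshold_runs(data, tr, n):
    """Return indexes i where data[i:i+n] are ALL strictly above tr,
    i.e. the start of each run of n consecutive values over the threshold."""
    flags = np.asarray(data) > tr
    # Rolling sum of the boolean flags over a window of size n
    window_sums = np.convolve(flags, np.ones(n, dtype=int), mode="valid")
    return np.flatnonzero(window_sums == n).tolist()

print(over_threshold_runs([203, 100, 205], 200, 2))                 # []
print(over_threshold_runs([201, 205, 205, 150, 203, 205], 200, 2))  # [0, 1, 4]
```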
|
Clear way to check if n consecutive values out of N are bigger than a threshold and save their index (corresponding to n)
|
I would like to locate n consecutive points out of N from a vector of data with length L.
Example: 2 out of 3 consecutive points bigger than a certain threshold tr.
data = [201,202,203, ..., L]
L = len(data)
N=3
tr = 200
for i in range(L-N+1):
subset = data[i:i+N]
if (subset[0] > tr and subset[1] > tr) or (subset[1] > tr and subset[2] > tr):
"save index"
I would like to know how to save the index of the elements that satisfy such conditions?
Is there an elegant way to do it more flexibly (n out of N)?
|
[
"Start by flagging the items that are within the desired range. Then perform a rolling sum of the flags and select the matching indexes in subranges that have the minimum count of flagged items.\nfrom itertools import islice\n\ndef getOver(data,minVal=200,minCount=2,window=3):\n inRange = [minVal<=n for n in data]\n groups = (islice(inRange,s,None) for s in range(window))\n indexes = { i for s,r in enumerate(zip(*groups)) if sum(r)>=minCount\n for i in range(s,s+window) if inRange[i]}\n return sorted(indexes)\n\nprint(getOver([100,205,205]))\n[1, 2]\n\nprint(getOver([201,205,205,150,190,203,100,205]))\n[0, 1, 2, 5, 7]\n\n",
"I just noticed that Alin T. answer is not completely correct. For example: getOver([203,100,205]) we will get [0,2], which is not quite correct as there're not n (2) consecutive elements greater than a defined threshold (200). The output should be an empty array ([]).\nHere is a solution:\ndef xoutXconsecutive(data, tr, n):\n out = []\n for i in range(0, len(data)-n+1):\n subset = np.array(data[i:i+n])\n isNotConsecutive = [False]*n\n for j in range(0, len(subset)):\n if (subset[j] < tr):\n isNotConsecutive[j] = True\n aux = False\n for k in range(1, len(isNotConsecutive)):\n if isNotConsecutive[k]:\n aux = True\n break\n if ( (isNotConsecutive[0] == False) and (aux == False) ):\n out += [i+(n-2), i+(n-1)]\n elif ( (isNotConsecutive[0] == True) and (aux == False)):\n out += [i+(n-1)]\n\n return np.unique(out)\n\nprint(xoutXconsecutive([203,100,205], 200, 3))\n[]\n"
] |
[
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0071179081_python.txt
|
Q:
Pandas: sum next 5 items of dataframe after some specific item
I have DataFrame which looks like just a list of numbers:
original  option 1  option 2
       1       NaN       NaN
      -1       NaN         9
       4       NaN       NaN
      -1       NaN        15
       6         9       NaN
       7       NaN       NaN
       2        15       NaN
       3       NaN       NaN
       0       NaN       NaN
I need to sum the next 3 values of df after each negative value - see the "option 1" or "option 2" columns.
It will also work if I get only the sum results, i.e. a separate data structure which would look like [9, 15].
Any thoughts?
A:
One approach could be as follows:
import pandas as pd
data = {'original': {0: 1, 1: -1, 2: 4, 3: -1, 4: 6, 5: 7, 6: 2, 7: 3, 8: 0}}
df = pd.DataFrame(data)
n = 3
df['option 1'] = (df['original'].rolling(n).sum()
.where(df['original'].shift(n).lt(0))
)
df['option 2'] = df['option 1'].shift(-n)
print(df)
original option 1 option 2
0 1 NaN NaN
1 -1 NaN 9.0
2 4 NaN NaN
3 -1 NaN 15.0
4 6 9.0 NaN
5 7 NaN NaN
6 2 15.0 NaN
7 3 NaN NaN
8 0 NaN NaN
Explanation
First, use Series.rolling to create a rolling window for applying sum.
Next, chain Series.where and set the cond parameter to an evaluation of values less than zero (lt) for a shifted (shift) version of column original.
For option 2 we simply apply a negative shift on option 1.
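If only the resulting sums are wanted (the [9, 15] structure mentioned in the question), a short sketch that slices the n values after each negative entry:

```python
import pandas as pd

s = pd.Series([1, -1, 4, -1, 6, 7, 2, 3, 0])
n = 3

# For each negative value, sum the n values that follow it
sums = [s.iloc[i + 1:i + 1 + n].sum() for i in s.index[s < 0]]
print(sums)  # [9, 15]
```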
|
Pandas: sum next 5 items of dataframe after some specific item
|
I have DataFrame which looks like just a list of numbers:
original  option 1  option 2
       1       NaN       NaN
      -1       NaN         9
       4       NaN       NaN
      -1       NaN        15
       6         9       NaN
       7       NaN       NaN
       2        15       NaN
       3       NaN       NaN
       0       NaN       NaN
I need to sum the next 3 values of df after each negative value - see the "option 1" or "option 2" columns.
It will also work if I get only the sum results, i.e. a separate data structure which would look like [9, 15].
Any thoughts?
|
[
"One approach could be as follows:\nimport pandas as pd\n\ndata = {'original': {0: 1, 1: -1, 2: 4, 3: -1, 4: 6, 5: 7, 6: 2, 7: 3, 8: 0}}\ndf = pd.DataFrame(data)\n\nn = 3\n\ndf['option 1'] = (df['original'].rolling(n).sum()\n .where(df['original'].shift(n).lt(0))\n )\n \ndf['option 2'] = df['option 1'].shift(-n)\n\nprint(df)\n\n original option 1 option 2\n0 1 NaN NaN\n1 -1 NaN 9.0\n2 4 NaN NaN\n3 -1 NaN 15.0\n4 6 9.0 NaN\n5 7 NaN NaN\n6 2 15.0 NaN\n7 3 NaN NaN\n8 0 NaN NaN\n\nExplanation\n\nFirst, use Series.rolling to create a rolling window for applying sum.\nNext, chain Series.where and set the cond parameter to an evaluation of values less than zero (lt) for a shifted (shift) version of column original.\nFor option 2 we simply apply a negative shift on option 1.\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074581217_dataframe_pandas_python.txt
|
Q:
ImportError: No module named 'pygame'
I have installed python 3.3.2 and pygame 1.9.2a0. Whenever I try to import pygame by typing:
import pygame
I get the following error message:
Python 3.3.2 (v3.3.2:d047928ae3f6, May 16 2013, 00:03:43) [MSC v.1600 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> import pygame
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import pygame
ImportError: No module named 'pygame'
>>>
I went through some of the questions related to this error but none of the solutions helped.
I have a 64-bit machine with Windows 7.
A:
go to python/scripts folder, open a command window to this path, type the
following:
C:\python34\scripts> python -m pip install pygame
To test it, open python IDE and type
import pygame
print (pygame.ver)
It worked for me...
A:
Here are instructions for users with the newer Python 3.5 (Google brought me here, I suspect other 3.5 users might end up here as well):
I just successfully installed Pygame 1.9.2a0-cp35 on Windows and it runs with Python 3.5.1.
Install Python, and remember the install location
Go here and download pygame-1.9.2a0-cp35-none-win32.whl
Move the downloaded .whl file to your python35/Scripts directory
Open a command prompt in the Scripts directory (Shift-Right click in the directory > Open a command window here)
Enter the command:
pip3 install pygame-1.9.2a0-cp35-none-win32.whl
If you get an error in the last step, try:
python -m pip install pygame-1.9.2a0-cp35-none-win32.whl
And that should do it. Tested as working on Windows 10 64bit.
A:
I was trying to figure this out for at least an hour. And you're right, the problem is that the installation files are all for 32 bit.
Luckily I found a link to the 64 pygame download! Here it is: http://www.lfd.uci.edu/~gohlke/pythonlibs/#pygame
Just pick the corresponding version according to your python version and it should work like magic. The installer will bring you to a bright-blue screen during the installation (at this point you know that the installation is correct for you).
Then go into the Python IDLE and type "import pygame" and you should not get any more errors.
Props go to @yuvi who shared the link with StackOverflow.
A:
open the folder where your python is installed
open scripts folder
type cmd in the address bar. It opens a command prompt window in that location
type pip install pygame and press enter
it should download and install pygame module
now run your code. It works fine :-)
A:
I had the same problem and discovered that Pygame doesn't work with Python 3, at least on macOS, but I also have Python 2 installed on my computer, as you probably do too. So when I use Pygame, I switch the path so that it uses Python 2 instead of Python 3. I use Sublime Text as my text editor, so I just go to
Tools > Build Systems > New Build System and enter the following:
{
"cmd": ["/usr/local/bin/python", "-u", "$file"],
}
instead of
{
"cmd": ["/usr/local/bin/python3", "-u", "$file"],
}
in my case. And when I'm not using pygame, I simply change the path back so that I can use Python3.
A:
The current PyGame release, 1.9.6, doesn't support Python 3.9. If you don't want to wait for PyGame 2.0, you have to use Python 3.8. Alternatively, you can install a developer version by explicitly specifying the version (2.0.0.dev20 is the latest release at the time of writing):
pip install pygame==2.0.0.dev20
or try to install a pre-release version by enabling the --pre option:
pip install pygame --pre
A:
Resolved !
Here is an example
C:\Users\user\AppData\Local\Programs\Python\Python36-32\Scripts>pip install pygame
A:
try this in your command prompt:
python -m pip install pygame
A:
I was getting the same error. It is because your version of Pygame is not compatible with your version of Python or Pydev. Go to this link and get the proper version of Pygame for your current version of Python. Ctrl+F to find it faster, or click on the word "python" in blue up at the top. While you install Pygame, it should find the Python path by itself. At least mine did, anyway. I run Pygame through Eclipse with Python 3.4.
http://www.lfd.uci.edu/~gohlke/pythonlibs/
A:
Since no answer stated this:
Make sure that, if you are using a virtual environment, you have activated it before trying to run the program.
If you don't really know if you are using a virtual environment or not, check with the other contributors of the project. Or maybe try to find a file with the name activate like this: find . -name activate.
A:
Install and download pygame .whl file.
Move .whl file to your python35/Scripts
Go to cmd
Change directory to python scripts
Type:
pip install pygame
Here is an example:
C:\Users\user\AppData\Local\Programs\Python\Python36-32\Scripts>pip install pygame
A:
Just use this command in the terminal python3 -m pip install -U pygame --user
A:
I am quite a newbie to python and I was having the same issue. (windows x64 os)
I solved it by doing the steps below:
I removed python (x64 version) and pygame
I have downloaded and installed python 2.6.6 x86: https://www.python.org/ftp/python/2.6.6/python-2.6.6.msi
I have downloaded and installed pygame (when installing, I have chosen the directory that I installed python): http://pygame.org/ftp/pygame-1.9.1.win32-py2.6.msi
Works well :)
A:
You don't need 64 bit Python on Win64 system, just install the 32bit versions of both Python and Pygame and they will work just fine (and there is a ton more modules for them anyways).
A:
I’m using the PyCharm IDE. I could get Pygame to work with IDLE but not with PyCharm. This video helped me install Pygame through PyCharm.
https://youtu.be/HJ9bTO5yYw0
(It seems that PyCharm only recognizes a package; if you use its GUI.)
However, there were a few slight differences for me; because I’m using Windows instead of a Mac.
My “preferences” menu is found in: File->Settings…
Then, in the next screen, I expanded my project menu, and clicked Project Interpreter. Then I clicked the green plus icon to the right to get to the Available Packages screen.
A:
I ran into the error a few days ago! Thankfully, I found the answer.
You see, the problem is that pygame comes in a .whl (wheel) file/package. So, as a result, you have to pip install it.
Pip installing is a very tricky process, so please be careful. The steps are:-
Step1. Go to C:/Python (whatever version you are using)/Scripts. Scroll down. If you see a file named pip.exe, then that means that you are in the right folder. Copy the path.
Step2. In your computer, search for Environment Variables. You should see an option labeled 'Edit the System Environment Variables'. Click on it.
Step3. There, you should see a dialogue box appear. Click 'Environment Variables'. Click on 'Path'. Then, click 'New'. Paste the path that you copied earlier.
Step4. Click 'Ok'.
Step5. Shift + Right Click wherever your pygame is installed. Select 'Open Command Window Here' from the dropdown menu. Type in 'pip install py' then click tab and the full file name should fill in. Then, press Enter, and you're ready to go! Now you shouldn't get the error again!!!
A:
First execute python3, then type the command import pygame; now you can see the output.
A:
For this you have to install the pygame package from cmd (on Windows) or from the terminal (on Mac). Just type pip install pygame.
If it doesn't work for you, then try using this statement: pip3 install pygame.
If it is still showing an error, then you don't have pip installed on your device; try installing pip first.
A:
I just encountered the same problem and found that I have multiple interpreters of different versions installed on my system, and pygame got installed in one of them when I installed it using the command line; but in my IDE another interpreter was selected, so this messed up my system. Try to see if you are also in the same situation.
A:
make sure if you are on windows that your library directory is added to path
A:
This may happen when pygame isn't installed. Install pygame first with pip:
pip install pygame
If that doesn't work, update pip by going to the python install folder and typing
python -m pip install --upgrade pip
Hope it works.
A:
Try this solution:
Type in to cmd (Windows):
C:\Users\'Your name'> pip install -U pygame
You should remove python -m, py -m, or python3 -m before the pip.
Also remove --user behind it.
It will say:
C:\Users\viait>pip install -U pygame
Defaulting to user installation because normal site-packages is not writeable
Collecting pygame
Downloading pygame-2.1.2-cp310-cp310-win_amd64.whl (8.4 MB)
---------------------------------------- 8.4/8.4 MB 1.7 MB/s eta 0:00:00
Installing collected packages: pygame
Successfully installed pygame-2.1.2
Then test it in your IDE or cmd:
(CMD example)
C:\Users\viait>python
Python 3.10.3 (tags/v3.10.3:a342a49, Mar 16 2022, 13:07:40) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
pygame 2.1.2 (SDL 2.0.18, Python 3.10.3)
Hello from the pygame community. https://www.pygame.org/contribute.html
(IDE example)
import pygame
You can do this without any errors.
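A common cause of this error is that pip installed pygame into a different interpreter than the one running the script. A small diagnostic sketch (no pygame required) shows which interpreter is active and whether pygame is visible to it:

```python
import sys
import importlib.util

# The interpreter that runs your script must be the one pip installed into.
print("interpreter:", sys.executable)

# Check importability without triggering pygame's startup message
found = importlib.util.find_spec("pygame") is not None
print("pygame installed for this interpreter:", found)
```

If found is False here but `pip install pygame` reports success, pip belongs to a different Python installation; running `python -m pip install pygame` with this same interpreter usually resolves it.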
|
ImportError: No module named 'pygame'
|
I have installed python 3.3.2 and pygame 1.9.2a0. Whenever I try to import pygame by typing:
import pygame
I get the following error message:
Python 3.3.2 (v3.3.2:d047928ae3f6, May 16 2013, 00:03:43) [MSC v.1600 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> import pygame
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import pygame
ImportError: No module named 'pygame'
>>>
I went through some of the questions related to this error but none of the solutions helped.
I have a 64-bit machine with Windows 7.
|
[
"go to python/scripts folder, open a command window to this path, type the\nfollowing:\nC:\\python34\\scripts> python -m pip install pygame\n\nTo test it, open python IDE and type\nimport pygame\n\nprint (pygame.ver)\n\nIt worked for me...\n",
"Here are instructions for users with the newer Python 3.5 (Google brought me here, I suspect other 3.5 users might end up here as well):\nI just successfully installed Pygame 1.9.2a0-cp35 on Windows and it runs with Python 3.5.1. \n\nInstall Python, and remember the install location \nGo here and download pygame-1.9.2a0-cp35-none-win32.whl\nMove the downloaded .whl file to your python35/Scripts directory\nOpen a command prompt in the Scripts directory (Shift-Right click in the directory > Open a command window here)\nEnter the command:\npip3 install pygame-1.9.2a0-cp35-none-win32.whl\nIf you get an error in the last step, try:\npython -m pip install pygame-1.9.2a0-cp35-none-win32.whl\n\nAnd that should do it. Tested as working on Windows 10 64bit.\n",
"I was trying to figure this out for at least an hour. And you're right the problem is that the installation files are all for 32 bit.\nLuckily I found a link to the 64 pygame download! Here it is: http://www.lfd.uci.edu/~gohlke/pythonlibs/#pygame \nJust pick the corresponding version according to your python version and it should work like magic. The installation feature will bring you to a bright-blue screen as the installation (at this point you know that the installation is correct for you.\nThen go into the Python IDLE and type \"import pygame\" and you should not get any more errors. \nProps go to @yuvi who shared the link with StackOverflow.\n",
"\nopen the folder where your python is installed\nopen scripts folder\ntype cmd in the address bar. It opens a command prompt window in that location\ntype pip install pygame and press enter\nit should download and install pygame module\nnow run your code. It works fine :-)\n\n",
"I had the same problem and discovered that Pygame doesn't work for Python3 at least on the Mac OS, but I also have Tython2 installed in my computer as you probably do too, so when I use Pygame, I switch the path so that it uses python2 instead of python3. I use Sublime Text as my text editor so I just go to \nTools > Build Systems > New Build System and enter the following:\n{\n \"cmd\": [\"/usr/local/bin/python\", \"-u\", \"$file\"], \n}\n\ninstead of \n{\n \"cmd\": [\"/usr/local/bin/python3\", \"-u\", \"$file\"], \n}\n\nin my case. And when I'm not using pygame, I simply change the path back so that I can use Python3.\n",
"The current PyGame release, 1.9.6 doesn't support Python 3.9. I fyou don't want to wait for PyGame 2.0, you have to use Python 3.8. Alternatively, you can install a developer version by explicitly specifying the version (2.0.0.dev20 is the latest release at the time of writing):\npip install pygame==2.0.0.dev20\n\nor try to install a pre-release version by enabling the --pre option:\npip install pygame --pre\n\n",
"Resolved !\nHere is an example\nC:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python36-32\\Scripts>pip install pygame\n\n",
"try this in your command prompt:\npython -m pip install pygame\n",
"I was getting the same error. It is because your version of Pygame is not compatible with your version of Python or Pydev. Go to this link and get the proper version of Pygame for your current version of Python. Ctrl F to find it faster or click on the word python in blue. up at the top. While you instal Pygame it should find the Python path by itself. At least mind did any ways. I run Pygame through Eclipse with Python 3.4.\nhttp://www.lfd.uci.edu/~gohlke/pythonlibs/\n",
"Since no answer stated this:\nMake sure that, if you are using a virtual environment, you have activated it before trying to run the program.\nIf you don't really know if you are using a virtual environment or not, check with the other contributors of the project. Or maybe try to find a file with the name activate like this: find . -name activate.\n",
"\nInstall and download pygame .whl file.\nMove .whl file to your python35/Scripts\nGo to cmd\nChange directory to python scripts\nType:\npip install pygame\n\n\nHere is an example:\nC:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python36-32\\Scripts>pip install pygame\n\n",
"Just use this command in the terminal python3 -m pip install -U pygame --user\n",
"I am a quite newbie to python and I was having same issue. (windows x64 os) \nI have solved, doing below steps\n\nI removed python (x64 version) and pygame\nI have downloaded and installed python 2.6.6 x86: https://www.python.org/ftp/python/2.6.6/python-2.6.6.msi\nI have downloaded and installed pygame (when installing, I have chosen the directory that I installed python): http://pygame.org/ftp/pygame-1.9.1.win32-py2.6.msi\nWorks well :)\n\n",
"You don't need 64 bit Python on Win64 system, just install the 32bit versions of both Python and Pygame and they will work just fine (and there is a ton more modules for them anyways).\n",
"I’m using the PyCharm IDE. I could get Pygame to work with IDLE but not with PyCharm. This video helped me install Pygame through PyCharm. \nhttps://youtu.be/HJ9bTO5yYw0\n(It seems that PyCharm only recognizes a package; if you use its GUI.) \nHowever, there were a few slight differences for me; because I’m using Windows instead of a Mac. \nMy “preferences” menu is found in: File->Settings…\nThen, in the next screen, I expanded my project menu, and clicked Project Interpreter. Then I clicked the green plus icon to the right to get to the Available Packages screen. \n",
"I ran into the error a few days ago! Thankfully, I found the answer. \nYou see, the problem is that pygame comes in a .whl (wheel) file/package. So, as a result, you have to pip install it. \nPip installing is a very tricky process, so please be careful. The steps are:- \nStep1. Go to C:/Python (whatever version you are using)/Scripts. Scroll down. If you see a file named pip.exe, then that means that you are in the right folder. Copy the path. \nStep2. In your computer, search for Environment Variables. You should see an option labeled 'Edit the System Environment Variables'. Click on it. \nStep3. There, you should see a dialogue box appear. Click 'Environment Variables'. Click on 'Path'. Then, click 'New'. Paste the path that you copies earlier. \nStep4. Click 'Ok'. \nStep5. Shift + Right Click wherever your pygame is installed. Select 'Open Command Window Here' from the dropdown menu. Type in 'pip install py' then click tab and the full file name should fill in. Then, press Enter, and you're ready to go! Now you shouldn't get the error again!!!\n",
"First execute python3 then type the command import pygame,now you can see the output \n",
"For this you have to install pygame package from the cmd (on Windows) or from terminal (on mac). Just type pip install pygame\n.If it doesn't work for you, then try using this statement pip3 install pygame .\nIf it is still showing an error then you don't have pip installed on your device and try installing pip first.\n",
"I just encountered the same problem and found that I am having multiple interpreters of the different versions installed in my system and pygame got installed in one of them when I installed it using command but in my IDE another interpreter was selected so this messed up my system, try to see if you are also having the same situation.\n",
"make sure if you are on windows that your library directory is added to path\n",
"This may happen when pygame didn't installed, install the pygame first\npip\npip install pygame\n\nif dont work update the PIP by goto python install folder and type\npython -m pip install --upgrade pip\n\nhope it work\n",
"Try this solution:\nType in to cmd (Windows):\nC:\\Users\\'Your name'> pip install -U pygame\n\nYou should remove python -m, py -m, python3 -m before the pip\nAlso remove --user behind it.\nIt will said:\nC:\\Users\\viait>pip install -U pygame\nDefaulting to user installation because normal site-packages is not writeable\nCollecting pygame\n Downloading pygame-2.1.2-cp310-cp310-win_amd64.whl (8.4 MB)\n ---------------------------------------- 8.4/8.4 MB 1.7 MB/s eta 0:00:00\nInstalling collected packages: pygame\nSuccessfully installed pygame-2.1.2\n\nThen test it in your IDE or cmd:\n(CMD example)\nC:\\Users\\viait>python\nPython 3.10.3 (tags/v3.10.3:a342a49, Mar 16 2022, 13:07:40) [MSC v.1929 64 bit (AMD64)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import pygame\npygame 2.1.2 (SDL 2.0.18, Python 3.10.3)\nHello from the pygame community. https://www.pygame.org/contribute.html\n\n(IDE example)\nimport pygame\n\nYou can do this without any errors.\n"
] |
[
32,
14,
12,
10,
3,
3,
2,
2,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] |
[
"You gotta use Pycharm and install it in Terminal using pip install pygame and also after that enter Pycharm and hover on pygame in the \"Import pygame\" and in Pycharm it will tell you to download that and you can easily download it and enjoy your result\n",
"You could use\npip install pygame\n\nbut if you use IDE like PyCharm, then you could just either install it from Python Packages or use right click at the package name then left click on Show Context Actions then left click on Install package pygame\n(Personally, I recommended using Python Packages for the package installing because it has documentation with it)\n",
"I was having the same trouble and I did\npip install pygame\n\nand that worked for me!\n"
] |
[
-1,
-1,
-4
] |
[
"import",
"pygame",
"python"
] |
stackoverflow_0018317521_import_pygame_python.txt
|
Q:
Why does a yield from inside __next__() return generator object?
I am using yield to return the next value in the __next__() function in my class. However it does not return the next value, it returns the generator object.
I am trying to better understand iterators and yield. I might be doing it in the wrong way.
Have a look.
class MyString:
def __init__(self,s):
self.s=s
def __iter__(self):
return self
def __next__(self):
for i in range(len(self.s)):
yield(self.s[i])
r=MyString("abc")
i=iter(r)
print(next(i))
This returns:
generator object __next__ at 0x032C05A0
A:
next pretty much just calls __next__() in this case. Calling __next__ on your object will create the generator and return it (no code in the generator's body runs at this point).
In this case, you might be able to get away with not defining __next__ at all:
class MyString:
def __init__(self,s):
self.s=s
def __iter__(self):
for i in range(len(self.s)):
yield(self.s[i])
# Or...
# for item in self.s:
# yield item
If you wanted to use __iter__ and __next__ (to define an iterator rather than simply making an iterable), you'd probably want to do something like this:
class MyString:
def __init__(self,s):
self.s = s
self._ix = None
def __iter__(self):
return self
def __next__(self):
if self._ix is None:
self._ix = 0
try:
item = self.s[self._ix]
except IndexError:
# Possibly reset `self._ix`?
raise StopIteration
self._ix += 1
return item
A:
Let's take a look at the purpose of the __next__ method. From the docs:
iterator.__next__()
Return the next item from the container. If there are no further items, raise the StopIteration exception.
Now let's see what the yield statement does. Another excerpt from the docs:
Using a yield expression in a function’s body causes that function to
be a generator
And
When a generator function is called, it returns an iterator known as a
generator.
Now compare __next__ and yield: __next__ returns the next item from the container. But a function containing the yield keyword returns an iterator. Consequently, using yield in a __next__ method results in an iterator that yields iterators.
If you want to use yield to make your class iterable, do it in the __iter__ method:
class MyString:
def __init__(self, s):
self.s = s
def __iter__(self):
for s in self.s:
yield s
The __iter__ method is supposed to return an iterator - and the yield keyword makes it do exactly that.
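A quick standalone check that this yield-based __iter__ produces a fresh generator on every call, so the object can be looped over more than once (the class is repeated here so the snippet runs on its own):

```python
class MyString:
    def __init__(self, s):
        self.s = s

    def __iter__(self):
        # each call to __iter__ creates a brand-new generator
        for ch in self.s:
            yield ch

m = MyString("abc")
print(list(m))   # ['a', 'b', 'c']
print(list(m))   # ['a', 'b', 'c'] -- re-iterable, unlike a stateful iterator
```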
For completeness, here is how you would implement an iterator with a __next__ method. You have to keep track of the state of the iteration, and return the corresponding value. The easiest solution is probably to increment an index every time __next__ is called:
class MyString:
def __init__(self,s):
self.s = s
self.index = -1
def __iter__(self):
return self
def __next__(self):
self.index += 1
if self.index >= len(self.s):
raise StopIteration
return self.s[self.index]
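To see the fix in action, a quick standalone check of the __next__-based version above (same class, repeated here so the snippet runs on its own). Note that this iterator keeps its position between calls and is exhausted after one pass:

```python
class MyString:
    def __init__(self, s):
        self.s = s
        self.index = -1

    def __iter__(self):
        return self

    def __next__(self):
        self.index += 1
        if self.index >= len(self.s):
            raise StopIteration
        return self.s[self.index]

m = MyString("abc")
print(next(iter(m)))  # 'a' -- next() now returns an item, not a generator
print(list(m))        # ['b', 'c'] -- the iterator remembered its position
```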
A:
As far as I can tell, generator functions are just syntactic sugar for classes with a next function. Example:
>>> def f():
i = 0
while True:
i += 1
yield i
>>> x = f()
>>> x
<generator object f at 0x0000000000659938>
>>> next(x)
1
>>> next(x)
2
>>> next(x)
3
>>> class g(object):
def __init__(self):
self.i = 0
def __next__(self):
self.i += 1
return self.i
>>> y = g()
>>> y
<__main__.g object at 0x000000000345D908>
>>> next(y)
1
>>> next(y)
2
>>> next(y)
3
In fact, I came here looking to see if there is any significant difference. Please shout if there is.
So, to answer the question, what you have is a class with a __next__ method that returns an object that also has a __next__ method. So the simplest thing to do would be to replace your yield with a return and to keep track of how far along you are, and to remember to raise a StopIteration when you reach the end of the array. So something like:
class MyString:
def __init__(self,s):
self.s=s
self._i = -1
def __iter__(self):
return self
def __next__(self):
self._i += 1
if self._i >= len(self.s):
raise StopIteration
return self.s[self._i]
That's probably the simplest way to achieve what I think you're looking for.
A:
OBSERVATION
If next() function calls __next__() method, what's happening in the following example.
Code:
class T:
def __init__(self):
self.s = 10
def __iter__(self):
for i in range(self.s):
yield i
def __next__(self):
print('__next__ method is called.')
if __name__== '__main__':
obj = T()
k = iter(obj)
print(next(k)) #0
print(next(k)) #1
print(next(k)) #2
print(next(k)) #3
print(next(k)) #4
print(next(k)) #5
print(next(k)) #6
print(next(k)) #7
print(next(k)) #8
print(next(k)) #9
print(next(k))
print(next(k))
Terminal:
C:...>python test.py
0
1
2
3
4
5
6
7
8
9
Traceback (most recent call last):
File "test.py", line 25, in <module>
print(next(k))
StopIteration
WHAT IS HAPPENING?
It seems that the next() function is not calling the __next__ method. I cannot understand why the Python docs state that "next(iterator, default) Retrieve the next item from the iterator by calling its __next__() method." If someone knows, let us know!
Case: __iter__ with __next__ in custom class with yield
So, if you want to use yield (in order to create a generator) with the __iter__ and __next__ methods in a custom class, do not put just the yield into the __next__ method; use it with __iter__(self) and return self.__next__() instead of return self.
Code:
class T:
def __init__(self):
self.s = 10
def __iter__(self):
return self.__next__()
def __next__(self):
for i in range(self.s):
yield i
if __name__== '__main__':
obj = T()
for i in obj:
print(i)
Terminal:
C:\...>python test.py
0
1
2
3
4
5
6
7
8
9
C:...>
Also, you can call any other method from __iter__ instead of __next__().
Code:
class T:
def __init__(self):
self.s = 10
def __iter__(self):
return self.foo()
def foo(self):
for i in range(self.s):
yield i
if __name__== '__main__':
obj = T()
for i in obj:
print(i)
You will have exactly the same results.
Case: yield in __iter__ method without __next__ method
I don't think it is a good idea to use yield in __iter__. OK, it works, but I think it destroys the class API.
Case: __iter__ with __next__ in custom class without yield
Use these methods (__iter__ and __next__). In __iter__, return self, and do not forget to raise StopIteration in the __next__ method.
Code:
class T:
def __init__(self):
self.s = 10
def __iter__(self):
self.__i = -1
return self
def __next__(self):
while self.__i < self.s-1:
self.__i+=1
return self.__i
raise StopIteration
if __name__== '__main__':
obj = T()
for i in obj:
print(i)
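One note on the OBSERVATION above: next(k) does call a __next__ method, just the generator's, not T's, because iter(obj) returns the generator object produced by calling __iter__. A small standalone sketch (the names are illustrative) that makes this visible:

```python
class T:
    def __iter__(self):
        # calling this generator function returns a generator object
        yield 1
        yield 2

    def __next__(self):
        raise RuntimeError("never reached via iter()/next()")

k = iter(T())            # k is the generator created by __iter__
print(type(k).__name__)  # 'generator'
print(next(k))           # 1 -- calls the *generator's* __next__, not T.__next__
```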
|
Why does a yield from inside __next__() return generator object?
|
I am using yield to return the next value in the __next__() function in my class. However it does not return the next value, it returns the generator object.
I am trying to better understand iterators and yield. I might be doing it in the wrong way.
Have a look.
class MyString:
def __init__(self,s):
self.s=s
def __iter__(self):
return self
def __next__(self):
for i in range(len(self.s)):
yield(self.s[i])
r=MyString("abc")
i=iter(r)
print(next(i))
This returns:
generator object __next__ at 0x032C05A0
|
[
"next pretty much just calls __next__() in this case. Calling __next__ on your object will start the generator and return it (no magic is done at this point).\n\nIn this case, you might be able to get away with not defining __next__ at all:\nclass MyString:\n def __init__(self,s):\n self.s=s\n\n def __iter__(self):\n for i in range(len(self.s)):\n yield(self.s[i])\n # Or...\n # for item in self.s:\n # yield item\n\nIf you wanted to use __iter__ and __next__ (to define an iterator rather than simply making an iterable), you'd probably want to do something like this:\nclass MyString:\n def __init__(self,s):\n self.s = s\n self._ix = None\n\n def __iter__(self):\n return self\n\n def __next__(self):\n if self._ix is None:\n self._ix = 0\n\n try:\n item = self.s[self._ix]\n except IndexError:\n # Possibly reset `self._ix`?\n raise StopIteration\n self._ix += 1\n return item\n\n",
"Let's take a look at the purpose of the __next__ method. From the docs:\n\niterator.__next__()\nReturn the next item from the container. If there are no further items, raise the StopIteration exception.\n\nNow let's see what the yield statement does. Another excerpt from the docs:\n\nUsing a yield expression in a function’s body causes that function to\n be a generator\n\nAnd\n\nWhen a generator function is called, it returns an iterator known as a\n generator.\n\nNow compare __next__ and yield: __next__ returns the next item from the container. But a function containing the yield keyword returns an iterator. Consequently, using yield in a __next__ method results in an iterator that yields iterators.\n\nIf you want to use yield to make your class iterable, do it in the __iter__ method:\nclass MyString:\n def __init__(self, s):\n self.s = s\n\n def __iter__(self):\n for s in self.s:\n yield s\n\nThe __iter__ method is supposed to return an iterator - and the yield keyword makes it do exactly that.\n\nFor completeness, here is how you would implement an iterator with a __next__ method. You have to keep track of the state of the iteration, and return the corresponding value. The easiest solution is probably to increment an index every time __next__ is called:\nclass MyString:\n def __init__(self,s):\n self.s = s\n self.index = -1\n\n def __iter__(self):\n return self\n\n def __next__(self):\n self.index += 1\n\n if self.index >= len(self.s):\n raise StopIteration\n\n return self.s[self.index]\n\n",
"As far as I can tell, generator functions are just syntactic sugar for classes with a next function. Example:\n>>> def f():\n i = 0\n while True:\n i += 1\n yield i\n\n\n>>> x = f()\n>>> x\n<generator object f at 0x0000000000659938>\n>>> next(x)\n1\n>>> next(x)\n2\n>>> next(x)\n3\n>>> class g(object):\n def __init__(self):\n self.i = 0\n\n def __next__(self):\n self.i += 1\n return self.i\n\n\n>>> y = g()\n>>> y\n<__main__.g object at 0x000000000345D908>\n>>> next(y)\n1\n>>> next(y)\n2\n>>> next(y)\n3\n\nIn fact, I came here looking to see if there is any significant difference. Please shout if there is. \nSo, to answer the question, what you have is a class with a __next__ method that returns an object that also has a __next__ method. So the simplest thing to do would be to replace your yield with a return and to keep track of how far along you are, and to remember to raise a StopIteration when you reach the end of the array. So something like:\nclass MyString:\n def __init__(self,s):\n self.s=s\n self._i = -1\n\n def __iter__(self):\n return self\n\n def __next__(self):\n self._i += 1\n if self._i >= len(self.s):\n raise StopIteration\n return self.s[self._i]\n\nThat's probably the simplest way to achieve what I think you're looking for.\n",
"OBSERVATION\nIf next() function calls __next__() method, what's happening in the following example.\nCode:\nclass T:\n def __init__(self):\n self.s = 10\n \n def __iter__(self):\n for i in range(self.s):\n yield i\n \n def __next__(self):\n print('__next__ method is called.')\n \nif __name__== '__main__':\n obj = T()\n k = iter(obj)\n print(next(k)) #0\n print(next(k)) #1\n print(next(k)) #2\n print(next(k)) #3\n print(next(k)) #4\n print(next(k)) #5\n print(next(k)) #6\n print(next(k)) #7\n print(next(k)) #8\n print(next(k)) #9\n print(next(k))\n print(next(k))\n\nTerminal:\nC:...>python test.py\n0\n1\n2\n3\n4\n5\n6\n7\n8\n9\nTraceback (most recent call last):\n File \"test.py\", line 25, in <module>\n print(next(k))\nStopIteration\n\nWHAT IS HAPPENING?\nIt seams that next() function does not calling __next__ method. I cannot understand why python docs states that \"next(iterator, default) Retrieve the next item from the iterator by calling its __next__() method.\" If someonw knows, let us help!\nCase: __iter__ with __next__ in custom class with yield\nSo, if you want to use yield (in order to create a generator) with __iter__ and __next__ methods in a custom class, do not put just the yield into the __next__ method, but use it with __iter__(self) and return self.__next__() instead return self.\nCode:\nclass T:\n def __init__(self):\n self.s = 10\n \n def __iter__(self):\n return self.__next__()\n \n def __next__(self):\n for i in range(self.s):\n yield i\n \nif __name__== '__main__':\n obj = T()\n for i in obj:\n print(i)\n\nTerminal:\nC:\\...>python test.py\n0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n\nC:...>\n\nAlso, you can call from __iter__ any other method instead __next__().\nCode:\nclass T:\n def __init__(self):\n self.s = 10\n \n def __iter__(self):\n return self.foo()\n \n def foo(self):\n for i in range(self.s):\n yield i\n \nif __name__== '__main__':\n obj = T()\n for i in obj:\n print(i)\n\nYou will have exactly the same results.\nCase: yield in __iter__ method 
without __next__ method\nI don't think it is a good idea to use yield in __iter__. Ok, it works, but I think that destroys the class API.\nCase: __iter__ with __next__ in custom class without yield\nUse these methods (__iter__ and __next__). In the __iter__ return self and do not forget to raise StopIteration in __next__ method.\nCode:\nclass T:\n def __init__(self):\n self.s = 10\n \n def __iter__(self):\n self.__i = -1\n return self\n \n def __next__(self):\n while self.__i < self.s-1:\n self.__i+=1\n return self.__i\n raise StopIteration\n \nif __name__== '__main__':\n obj = T()\n for i in obj:\n print(i)\n\n"
] |
[
17,
13,
0,
0
] |
[] |
[] |
[
"generator",
"next",
"python"
] |
stackoverflow_0037929956_generator_next_python.txt
|
Q:
pd.read_csv gives entire data in object dtype. How do I convert to int type?
I am trying to read a particular csv (plane-data.csv), but the entire df is of object dtype. I need 'year' to be an integer type so that I can perform calculations.
Please take a look at my screenshot
My dataset is from plane-data.csv link
Would really love some help; I have been searching the entire internet for 6 hours with no progress. Thank you!
Initially, I tried
import pandas as pd
df = pd.read_csv('plane-data.csv')
columns = ['type', 'manufacturer', 'issue_date', 'model', 'status', 'aircraft_type', 'engine_type']
df.drop(columns, axis=1, inplace=True)
df.dropna(inplace=True)
df['year'] = df['year'].astype(int)
and got
ValueError: invalid literal for int() with base 10: 'None'
Which I have found to be the result of NaN values.
I have cleared all null values and tried using
df['year'] = df['year'].astype(str).astype('Int64')
from other SO posts, which seems to work for them but not for me. I got
TypeError: object cannot be converted to an IntegerDtype
A:
You get the following error:
TypeError: 'method' object is not subscriptable
because you used [] instead of () in df['year'] = df['year'].astype[int]. You should use df['year'] = df['year'].astype(int)
A:
Since the column year contains a string value (literally None), pandas is considering the whole column as object. You can handle that by simply setting na_values=['None'] as an argument of pandas.read_csv:
df = pd.read_csv('plane-data.csv', na_values=['None'])
Or, you can use pandas.to_numeric :
df = pd.read_csv('plane-data.csv')
df['year']= pd.to_numeric(df['year'], errors='coerce') # invalid parsing will be set as NaN
# Output :
print(df.dtypes)
tailnum object
type object
manufacturer object
issue_date object
model object
status object
aircraft_type object
engine_type object
year float64
dtype: object
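If a proper integer dtype (rather than float64 with NaN) is needed, pandas' nullable Int64 extension type can hold missing values after pd.to_numeric. A minimal sketch using an inline frame in place of plane-data.csv (the sample values are made up):

```python
import pandas as pd

# 'None' appears as a literal string, the way it does in the CSV
df = pd.DataFrame({"year": ["1998", "None", "2004"]})

# coerce the bad value to NaN, then move to the nullable integer dtype
df["year"] = pd.to_numeric(df["year"], errors="coerce").astype("Int64")

print(df["year"].dtype)  # Int64 (nullable integer)
print(df["year"].tolist())
```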
|
pd.read_csv gives entire data in object dtype. How do I convert to int type?
|
I am trying to read a particular csv (plane-data.csv), but the entire df is of object dtype. I need 'year' to be an integer type so that I can perform calculations.
Please take a look at my screenshot
My dataset is from plane-data.csv link
Would really love some help; I have been searching the entire internet for 6 hours with no progress. Thank you!
Initially, I tried
import pandas as pd
df = pd.read_csv('plane-data.csv')
columns = ['type', 'manufacturer', 'issue_date', 'model', 'status', 'aircraft_type', 'engine_type']
df.drop(columns, axis=1, inplace=True)
df.dropna(inplace=True)
df['year'] = df['year'].astype(int)
and got
ValueError: invalid literal for int() with base 10: 'None'
Which I have found to be the result of NaN values.
I have cleared all null values and tried using
df['year'] = df['year'].astype(str).astype('Int64')
from other SO posts, which seems to work for them but not for me. I got
TypeError: object cannot be converted to an IntegerDtype
|
[
"You get the following error:\nTypeError: 'method' object is not subscriptable\n\nbecause you used [] instead of () in df['year'] = df['year'].astype[int]. You should use df['year'] = df['year'].astype(int)\n",
"Since the column year contains a string value (literally None), pandas is consedering the whole column as object. You can handle that by simply setting na_values=['None'] as an argument of pandas.read_csv :\ndf = pd.read_csv('plane-data.csv', na_values=['None'])\n\nOr, you can use pandas.to_numeric :\ndf = pd.read_csv('plane-data.csv')\n\ndf['year']= pd.to_numeric(df['year'], errors='coerce') # invalid parsing will be set as NaN\n\n# Output :\nprint(df.dtypes)\n\ntailnum object\ntype object\nmanufacturer object\nissue_date object\nmodel object\nstatus object\naircraft_type object\nengine_type object\nyear float64\ndtype: object\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"dataframe",
"integer",
"object",
"pandas",
"python"
] |
stackoverflow_0074581355_dataframe_integer_object_pandas_python.txt
|
Q:
Console/Terminal interactive chosen menu with keyboard arrow
main.py:
import keyboard
import ui
import os
os.system("cls")
ui.play[ui.counter] = "> " + ui.play[ui.counter] + " <"
ui.navmenuprint(ui.play)
while True:
while ui.state == "play":
keypressed = keyboard.read_key()
while keyboard.is_pressed("down"): pass
while keyboard.is_pressed("up"): pass
while keyboard.is_pressed("enter"): pass
if keypressed == "up":
os.system("cls")
ui.navup(ui.play, ui.play2)
ui.navmenuprint(ui.play)
if keypressed == "down":
os.system("cls")
ui.navdown(ui.play, ui.play2)
ui.navmenuprint(ui.play)
if keypressed == "enter":
if ui.counter == 0:
ui.switchstate("shop")
if ui.counter == 1:
ui.switchstate("shop")
if ui.counter == 2:
ui.switchstate("shop")
if ui.counter == 3:
ui.switchstate("shop")
while ui.state == "shop":
keypressed = keyboard.read_key()
while keyboard.is_pressed("down"): pass
while keyboard.is_pressed("up"): pass
while keyboard.is_pressed("enter"): pass
if keypressed == "up":
os.system("cls")
ui.navup(ui.shop, ui.shop2)
ui.navmenuprint(ui.shop)
if keypressed == "down":
os.system("cls")
ui.navdown(ui.shop, ui.shop2)
ui.navmenuprint(ui.shop)
if keypressed == "enter":
if ui.counter == 0:
ui.switchstate("play")
if ui.counter == 1:
ui.switchstate("play")
if ui.counter == 2:
ui.switchstate("play")
if ui.counter == 3:
ui.switchstate("play")
if ui.counter == 4:
ui.switchstate("play")
ui.py:
import os
from termcolor import cprint
state = "play"
counter = 0
play = ["TOSHOP", "TOSHOP", "TOSHOP","TOSHOP"]
play2 = ["TOSHOP", "TOSHOP", "TOSHOP","TOSHOP"]
shop = ["TOPLAY", "TOPLAY","TOPLAY","TOPLAY","TOPLAY"]
shop2 = ["TOPLAY", "TOPLAY","TOPLAY","TOPLAY","TOPLAY"]
def switchstate(fromwhere):
global state, counter
if fromwhere == "play":
counter = 0
state = fromwhere
play = play2.copy()
os.system("cls")
play[counter] = "> " + play[counter] + " <"
navmenuprint(play)
if fromwhere == "shop":
counter = 0
state = fromwhere
shop = shop2.copy()
os.system("cls")
shop[counter] = "> " + shop[counter] + " <"
navmenuprint(shop)
def navup(list1, list2):
global counter
if counter != 0:
list1[counter] = list2[counter]
counter -= 1
list1[counter] = "> " + list1[counter] + " <"
else:
list1[counter] = list2[counter]
counter -= 1
list1[counter] = "> " + list1[counter] + " <"
counter = len(list1) - 1
print (counter)
def navdown(list1,list2):
global counter
if counter != len(list1) - 1:
list1[counter] = list2[counter]
counter += 1
list1[counter] = "> " + list1[counter] + " <"
else:
list1[counter] = list2[counter]
counter = 0
list1[counter] = "> " + list1[counter] + " <"
print (counter)
def navmenuprint(list):
global counter
for i in list:
print(i)
This code is an extract from my little homemade console game project; I tried to delete all unnecessary code. I successfully made a working interactive menu, meaning navigation in the menu with the up and down arrows, the currently selected item shown as > item <, error handling if the list index goes out of range, and state handling (for switching screens).
Unfortunately I had to make a few ugly workarounds to make this happen, or I am just too much of a beginner to figure it out.
Python 3.11; I don't want to use additional modules.
The problem:
Go down to 4th item (counter variable value will be 3)
Press Enter
Go down to 5th item (counter variable value will be 4)
Press Enter
Press down
Actual:
TOSHOP
> TOSHOP <
TOSHOP
> TOSHOP <
Expected:
TOSHOP
> TOSHOP <
TOSHOP
TOSHOP
I understand my code and spent many hours trying to solve this issue, but I have no idea why it's faulty.
I think the counter variable value is correct everywhere.
I make sure to reset the "play" and "shop" lists to their original form and the counter variable to 0.
A:
I had to extend the global declaration with the list names inside the switchstate function:
def switchstate(fromwhere):
global state, counter, play, play2, shop, shop2
|
Console/Terminal interactive chosen menu with keyboard arrow
|
main.py:
import keyboard
import ui
import os
os.system("cls")
ui.play[ui.counter] = "> " + ui.play[ui.counter] + " <"
ui.navmenuprint(ui.play)
while True:
while ui.state == "play":
keypressed = keyboard.read_key()
while keyboard.is_pressed("down"): pass
while keyboard.is_pressed("up"): pass
while keyboard.is_pressed("enter"): pass
if keypressed == "up":
os.system("cls")
ui.navup(ui.play, ui.play2)
ui.navmenuprint(ui.play)
if keypressed == "down":
os.system("cls")
ui.navdown(ui.play, ui.play2)
ui.navmenuprint(ui.play)
if keypressed == "enter":
if ui.counter == 0:
ui.switchstate("shop")
if ui.counter == 1:
ui.switchstate("shop")
if ui.counter == 2:
ui.switchstate("shop")
if ui.counter == 3:
ui.switchstate("shop")
while ui.state == "shop":
keypressed = keyboard.read_key()
while keyboard.is_pressed("down"): pass
while keyboard.is_pressed("up"): pass
while keyboard.is_pressed("enter"): pass
if keypressed == "up":
os.system("cls")
ui.navup(ui.shop, ui.shop2)
ui.navmenuprint(ui.shop)
if keypressed == "down":
os.system("cls")
ui.navdown(ui.shop, ui.shop2)
ui.navmenuprint(ui.shop)
if keypressed == "enter":
if ui.counter == 0:
ui.switchstate("play")
if ui.counter == 1:
ui.switchstate("play")
if ui.counter == 2:
ui.switchstate("play")
if ui.counter == 3:
ui.switchstate("play")
if ui.counter == 4:
ui.switchstate("play")
ui.py:
import os
from termcolor import cprint
state = "play"
counter = 0
play = ["TOSHOP", "TOSHOP", "TOSHOP","TOSHOP"]
play2 = ["TOSHOP", "TOSHOP", "TOSHOP","TOSHOP"]
shop = ["TOPLAY", "TOPLAY","TOPLAY","TOPLAY","TOPLAY"]
shop2 = ["TOPLAY", "TOPLAY","TOPLAY","TOPLAY","TOPLAY"]
def switchstate(fromwhere):
global state, counter
if fromwhere == "play":
counter = 0
state = fromwhere
play = play2.copy()
os.system("cls")
play[counter] = "> " + play[counter] + " <"
navmenuprint(play)
if fromwhere == "shop":
counter = 0
state = fromwhere
shop = shop2.copy()
os.system("cls")
shop[counter] = "> " + shop[counter] + " <"
navmenuprint(shop)
def navup(list1, list2):
global counter
if counter != 0:
list1[counter] = list2[counter]
counter -= 1
list1[counter] = "> " + list1[counter] + " <"
else:
list1[counter] = list2[counter]
counter -= 1
list1[counter] = "> " + list1[counter] + " <"
counter = len(list1) - 1
print (counter)
def navdown(list1,list2):
global counter
if counter != len(list1) - 1:
list1[counter] = list2[counter]
counter += 1
list1[counter] = "> " + list1[counter] + " <"
else:
list1[counter] = list2[counter]
counter = 0
list1[counter] = "> " + list1[counter] + " <"
print (counter)
def navmenuprint(list):
global counter
for i in list:
print(i)
This code is an extract from my little homemade console game project, I tried to delete all unnecessary code, I successfully made a working interactive menu which means I want to achieve navigation with up and down arrow in menu and currently selected item show as > item <, handle error if list out of index, state handling (for switching screens).
Unfortunately I had to make a few ugly workaround to make this happen or I just too beginner to figure it out.
Python 3.11, I don't want to use additional modules.
The problem:
Go down to 4th item (counter variable value will be 3)
Press Enter
Go down to 5th item (counter variable value will be 4)
Press Enter
Press down
Actual:
TOSHOP
> TOSHOP <
TOSHOP
> TOSHOP <
Expected:
TOSHOP
> TOSHOP <
TOSHOP
TOSHOP
I understand my code and spent many hours to solve this issue but I have no idea why it's faulty.
I think counter variable value is good everywhere.
I make sure to reset "play" and "shop" list to original form and counter variable to 0.
|
[
"I had to expand global variables with lists inside switchstate function:\ndef switchstate(fromwhere):\n global state, counter, play, play2, shop, shop2\n\n"
] |
[
0
] |
[] |
[] |
[
"console",
"python",
"python_3.x",
"terminal"
] |
stackoverflow_0074578818_console_python_python_3.x_terminal.txt
|
Q:
Counting the keys in a nested dictionary and create a list containing the count under each keys
I've the following dictionary
{
"Africa":{
"All":{"ABC":0,"DEF":0,"GHI":0},
"NA":{"GHI":0},
"EXPORT":{"ABC":0,"DEF":0,"GHI":0},
"RE-EXPORT":{"ABC":0,"DEF":0,"GHI":0}
},
"Asia":{
"All":{"ABC":0,"DEF":0,"GHI":0},
"NA":{"ABC":0,"DEF":0},
"RE-EXPORT":{"ABC":0,"GHI":0}
},
"Australia":{
"All":{"DEF":0,"GHI":0},
"NA":{"ABC":0,"DEF":0,"GHI":0}
}
}
I have the following list
x=[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22]
I need to group the list x as following, based on the nested keys count
result = [
[
[1,2,3],
[4],
[5,6,7],
[8,9,10]
],
[
[11,12,13],
[14,15],
[16,17]
],
[
[18,19],
[20,21,22]
]
]
I have 3 parent keys (Africa, Asia, Australia), so the result will have 3 lists inside the main list.
Inside Africa there are 4 keys, so [[[],[],[],[]]]; next, under All there are 3 keys, so [[[1,2,3],[],[],[]]].
It's basically grouping the values based on the nested dictionary keys.
I tried with recursion but couldn't achieve this.
A:
you can use:
start=0
v1=[]
v2=[]
for i in a: # a= dictionary
for j in list(a[i].keys()):
mask=list(a[i][j].keys())
leng=len(mask)
mask[0:leng]=x[start:start+ leng]
start+=leng
v1.append(mask)
v2.append(v1)
v1=[]
print(v2)
'''
[
[[1, 2, 3], [4], [5, 6, 7], [8, 9, 10]],
[[11, 12, 13], [14, 15], [16, 17]],
[[18, 19], [20, 21, 22]],
]
'''
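For reference, the same grouping can be written without manual index bookkeeping by draining an iterator over x: each innermost dict consumes as many values as it has keys. A sketch assuming the question's dictionary is bound to a (relies on dict insertion order, guaranteed since Python 3.7):

```python
a = {
    "Africa": {"All": {"ABC": 0, "DEF": 0, "GHI": 0},
               "NA": {"GHI": 0},
               "EXPORT": {"ABC": 0, "DEF": 0, "GHI": 0},
               "RE-EXPORT": {"ABC": 0, "DEF": 0, "GHI": 0}},
    "Asia": {"All": {"ABC": 0, "DEF": 0, "GHI": 0},
             "NA": {"ABC": 0, "DEF": 0},
             "RE-EXPORT": {"ABC": 0, "GHI": 0}},
    "Australia": {"All": {"DEF": 0, "GHI": 0},
                  "NA": {"ABC": 0, "DEF": 0, "GHI": 0}},
}
x = list(range(1, 23))

it = iter(x)
# One inner list per second-level key, filled with as many values
# from x as that inner dict has keys.
result = [[[next(it) for _ in inner] for inner in outer.values()]
          for outer in a.values()]
```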
|
Counting the keys in a nested dictionary and create a list containing the count under each keys
|
I've the following dictionary
{
"Africa":{
"All":{"ABC":0,"DEF":0,"GHI":0},
"NA":{"GHI":0},
"EXPORT":{"ABC":0,"DEF":0,"GHI":0},
"RE-EXPORT":{"ABC":0,"DEF":0,"GHI":0}
},
"Asia":{
"All":{"ABC":0,"DEF":0,"GHI":0},
"NA":{"ABC":0,"DEF":0},
"RE-EXPORT":{"ABC":0,"GHI":0}
},
"Australia":{
"All":{"DEF":0,"GHI":0},
"NA":{"ABC":0,"DEF":0,"GHI":0}
}
}
I have the following list
x=[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22]
I need to group the list x as following, based on the nested keys count
result = [
[
[1,2,3],
[4],
[5,6,7],
[8,9,10]
],
[
[11,12,13],
[14,15],
[16,17]
],
[
[18,19],
[20,21,22]
]
]
I've 3 parent keys(Africa,Asia,Australia) so the result will have 3 lists inside a main list
Inside Africa 4 keys, so [[[],[],[],[]] and next under All I've 3 keys, so [[[[1],[2],[3]],[],[],[]]
It's basically grouping the values based on nested dictionary keys
I tried with recursion but couldn't achieve this
|
[
"you can use:\nstart=0\nv1=[]\nv2=[]\nfor i in a: # a= dictionary\n for j in list(a[i].keys()):\n mask=list(a[i][j].keys())\n leng=len(mask)\n mask[0:leng]=x[start:start+ leng]\n start+=leng\n v1.append(mask)\n v2.append(v1)\n v1=[]\n\nprint(v2)\n'''\n[\n [[1, 2, 3], [4], [5, 6, 7], [8, 9, 10]],\n [[11, 12, 13], [14, 15], [16, 17]],\n [[18, 19], [20, 21, 22]],\n]\n\n'''\n\n"
] |
[
0
] |
[] |
[] |
[
"dictionary",
"grouping",
"json",
"python",
"recursion"
] |
stackoverflow_0074581221_dictionary_grouping_json_python_recursion.txt
|
Q:
I am unable to use np.concatenate
I have 2 variables from polynomial regression:
y_test = [1.57325397 0.72686416]
y_pred= [1.57325397 0.72686416]
y_test is the y axis of the test I did, while y_pred holds the values I got from regressor.predict (regressor is an object of the LinearRegression class).
I tried to use np.concatenate((y_test),(y_predict)) but it did not work and said "only integer scalar arrays can be converted to a scalar index". So what should I do here? Is it OK to round the values to integers, or should I do something else?
A:
You should first separate your list values with a comma:
y_test = [1.57325397,0.72686416]
y_pred= [1.57325397,0.72686416]
For concatenation you should define an axis and use following syntax:
np.concatenate((y_test, y_pred), axis=0)
Then you would get
array([1.57325397, 0.72686416, 1.57325397, 0.72686416])
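The error in the question comes from (y_test) not being a tuple: parentheses around a single name do nothing, so NumPy treated the second array as the axis argument. A small sketch with the corrected call, plus np.column_stack for a row-per-sample layout, which is often what's wanted when comparing y_test with y_pred:

```python
import numpy as np

y_test = np.array([1.57325397, 0.72686416])
y_pred = np.array([1.57325397, 0.72686416])

# End-to-end join: note the single outer tuple wrapping BOTH arrays
joined = np.concatenate((y_test, y_pred))

# One row per sample, test value next to its prediction
pairs = np.column_stack((y_test, y_pred))
```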
|
I am unable to use np.concatenate
|
I have 2 variables from polynomial regression:
y_test = [1.57325397 0.72686416]
y_pred= [1.57325397 0.72686416]
y_test is the y axis of the test i did, while y_pred is are the values i got from regressor.predict (regressor is the object of LinearRegression class).
I tried to use np.concatenate((y_test),(y_predict)) but it did not work and it said only integer scalar arrays can be converted to a scalar index. So what should I do here? it OK to round of the values to integers or should I do something else?
|
[
"You should first separate your list values with a comma:\ny_test = [1.57325397,0.72686416]\ny_pred= [1.57325397,0.72686416]\n\nFor concatenation you should define an axis and use following syntax:\nnp.concatenate((y_test, y_pred), axis=0)\n\nThen you would get\narray([1.57325397, 0.72686416, 1.57325397, 0.72686416])\n\n"
] |
[
1
] |
[] |
[] |
[
"data_science",
"linear_regression",
"np",
"python"
] |
stackoverflow_0074581517_data_science_linear_regression_np_python.txt
|
Q:
How can I save the texts I have extracted with OCR from different images in multiple .txt files
I made an OCR program using the Python programming language and the tesserOCR library. In the program I have made, I scan all the pictures in a folder and extract the texts in them. But these extracted texts are saved in a single .txt file. How can I save the texts in each image to different .txt files? That is, the texts in each image should be saved as a .txt file named after that image.
`
import tesserocr
from PIL import Image
import glob
import time
import cv2
import numpy as np
Image.MAX_IMAGE_PIXELS = None
api = tesserocr.PyTessBaseAPI(path='D:/Anaconda/Tesseract5/tessdata', lang='tur')
files = glob.glob('C:/Users/Casper/Desktop/OCR/wpp/*')
filesProcessed = []
def extract():
for f, file in enumerate(files):
if f >= 0:
try:
text = ' '
jpegs = glob.glob('C:/Users/Casper/Desktop/OCR/wpp/*')
jpegs = sorted(jpegs)
print(len(jpegs))
for i in jpegs:
pil_image = Image.open(i)
api.SetImage(pil_image)
text = text + api.GetUTF8Text()
filename = file[:-4] + '.txt'
with open(filename, 'w') as n:
n.write(text)
except:
print(f'{file} is a corrupt file')
break
if __name__ == "__main__":
extract()
`
Texts from all images are saved in the same .txt file. I want them saved in different .txt files.
A:
I ran a version of your extract function where I removed all the stuff unrelated to writing to a file, and it writes a file for every single file in files.
def extract():
from os.path import splitext
for file in files:
try:
with open(splitext(file)[0] + ".txt", 'w') as n:
n.write(" ")
except:
print(f'{file} is a corrupt file')
break
A:
I fixed the problem. Currently the texts in all images are saved in different .txt files.
def extract():
    jpegs = glob.glob('C:/Users/Casper/Desktop/OCR/wpp/*')
    jpegs = sorted(jpegs)
    print(len(jpegs))
    for i, jpeg in enumerate(jpegs, start=1):
        img = Image.open(jpeg)
        api.SetImage(img)
        text = api.GetUTF8Text()
        print(i)
        # "with" closes each output file after writing
        with open(f'wpptext/text{i}.txt', 'w') as out:
            out.write(text)
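The essential change in both answers is computing each output filename per image inside the loop. A stripped-down sketch of just that logic, with get_text and write_file as hypothetical stand-ins for the tesserocr call and file writing (they are not part of the original code), so the naming behaviour is easy to check in isolation:

```python
from os.path import splitext

def output_name(image_path):
    # "photo.jpg" -> "photo.txt": one text file named after each image
    return splitext(image_path)[0] + ".txt"

def extract(image_paths, get_text, write_file):
    # One output file per image, written as soon as its text is extracted
    for path in sorted(image_paths):
        write_file(output_name(path), get_text(path))
```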
|
How can I save the texts I have extracted with OCR from different images in multiple .txt files
|
I made an OCR program using the Python programming language and the tesserOCR library. In the program I have made, I scan all the pictures in a folder and extract the texts in them. But these extracted texts are saved in a single .txt file. How can I save the texts in each image to different .txt files. That is, the texts in each image should be saved as a .txt file named after that image.
`
import tesserocr
from PIL import Image
import glob
import time
import cv2
import numpy as np
Image.MAX_IMAGE_PIXELS = None
api = tesserocr.PyTessBaseAPI(path='D:/Anaconda/Tesseract5/tessdata', lang='tur')
files = glob.glob('C:/Users/Casper/Desktop/OCR/wpp/*')
filesProcessed = []
def extract():
for f, file in enumerate(files):
if f >= 0:
try:
text = ' '
jpegs = glob.glob('C:/Users/Casper/Desktop/OCR/wpp/*')
jpegs = sorted(jpegs)
print(len(jpegs))
for i in jpegs:
pil_image = Image.open(i)
api.SetImage(pil_image)
text = text + api.GetUTF8Text()
filename = file[:-4] + '.txt'
with open(filename, 'w') as n:
n.write(text)
except:
print(f'{file} is a corrupt file')
break
if __name__ == "__main__":
extract()
`
Texts from all images are saved in the same .txt file. I want it to be saved in different .txt file.
|
[
"I ran a version of your extract function where I removed all the stuff unrelated to writing to a file, and it writes a file for every single file in files.\ndef extract():\n from os.path import splitext\n for file in files:\n try:\n with open(splitext(file)[0] + \".txt\", 'w') as n:\n n.write(\" \")\n except:\n print(f'{file} is a corrupt file')\n break\n\n",
"I fixed the problem. Currently the texts in all images are saved in different .txt files.\ndef extract():\n\njpegs = glob.glob('C:/Users/Casper/Desktop/OCR/wpp/*')\njpegs = sorted(jpegs)\nprint(len(jpegs))\nn = len(jpegs)\n\nfor i in range(0, n):\n img = Image.open(jpegs[i])\n api.SetImage(img)\n text = api.GetUTF8Text()\n print(i + 1)\n filename = open(f'wpptext/text{i + 1}.txt', \"+w\")\n filename.write(text)\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"ocr",
"python",
"python_tesseract",
"tesseract"
] |
stackoverflow_0074573071_ocr_python_python_tesseract_tesseract.txt
|
Q:
How do I combine similar dates based on a particular value?
Trade Date Options Class Underlying Product Type Volume
0 2022-01-03 A A S 14
1 2022-01-03 A A S 3
2 2022-01-03 A A S 42
3 2022-01-03 A A S 10
4 2022-01-03 AA AA S 1924
print(df.groupby('Trade Date','Underlying').sum())
How do I combine all the similar dates together based on a particular underlying?
For example, in the data above I would get a single line of 2022-01-03 for A with the sum of its volume.
I tried using:
print(df.groupby('Trade Date','Underlying').sum())
A:
df.groupby(['Trade Date', 'Underlying'])['Volume'].sum()
output:
> Trade Date Underlying
> 2022-01-03 A 69
> AA 1924
> Name: Volume, dtype: int64
A:
You're close; you can in fact use GroupBy.sum, but you need to put the groups/columns inside square brackets instead of parentheses. Also, you need to select only the columns which should be valid for the sum (Volume in your case), otherwise you'll get a warning:
FutureWarning: The default value of numeric_only in
DataFrameGroupBy.sum is deprecated. In a future version, numeric_only
will default to False. Either specify numeric_only or select only
columns which should be valid for the function.
Try this :
out= df.groupby(["Trade Date", "Underlying"], as_index=False)["Volume"].sum()
Or this :
out= df.groupby(["Trade Date", "Underlying"], as_index=False).sum(numeric_only=True)
# Output :
print(out)
Trade Date Underlying Volume
0 2022-01-03 A 69
1 2022-01-03 AA 1924
|
How do I combine similar dates based on a particular value?
|
Trade Date Options Class Underlying Product Type Volume
0 2022-01-03 A A S 14
1 2022-01-03 A A S 3
2 2022-01-03 A A S 42
3 2022-01-03 A A S 10
4 2022-01-03 AA AA S 1924
print(df.groupby('Trade Date','Underlying').sum())
How do combine all the similar dates together based on a particular underlying?
For example in the above example i will get a single line of 2022-01-03 for A with the sum of its volume
I tried using:
print(df.groupby('Trade Date','Underlying').sum())
|
[
"df.groupby(['Trade Date', 'Underlying'])['Volume'].sum()\n\noutput:\n> Trade Date Underlying\n> 2022-01-03 A 69\n> AA 1924\n> Name: Volume, dtype: int64\n\n",
"You're close, you can in fact use GroupBy.sum but you need to put the groups/columns inside square brackets instead of parenthesis. Also, you need to select only columns which should be valid for the sum (Volume in your case), otherwise you'll get a warning :\n\nFutureWarning: The default value of numeric_only in\nDataFrameGroupBy.sum is deprecated. In a future version, numeric_only\nwill default to False. Either specify numeric_only or select only\ncolumns which should be valid for the function.\n\nTry this :\nout= df.groupby([\"Trade Date\", \"Underlying\"], as_index=False)[\"Volume\"].sum()\n\nOr this :\nout= df.groupby([\"Trade Date\", \"Underlying\"], as_index=False).sum(numeric_only=True)\n\n# Output :\nprint(out)\n\n Trade Date Underlying Volume\n0 2022-01-03 A 69\n1 2022-01-03 AA 1924\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074581568_python.txt
|
Q:
Struggling to make a nested json API output into a pandas df
I am working with json data for the first time in Python (API output). I am struggling a bit to understand how to convert the following results to a pandas-like dataframe:
{'coord': {'lon': 13.4105, 'lat': 52.5244}, 'weather': [{'id': 801, 'main': 'Clouds', 'description': 'few clouds', 'icon': '02d'}], 'base': 'stations', 'main': {'temp': 3.21, 'feels_like': -1.29, 'temp_min': 2.22, 'temp_max': 4.09, 'pressure': 1007, 'humidity': 91}, 'visibility': 10000, 'wind': {'speed': 5.81, 'deg': 119, 'gust': 7.15}, 'clouds': {'all': 20}, 'dt': 1669622280, 'sys': {'type': 2, 'id': 2011538, 'country': 'DE', 'sunrise': 1669618193, 'sunset': 1669647541}, 'timezone': 3600, 'id': 2950159, 'name': 'Berlin', 'cod': 200}
Specifically, I would like the data to look a bit like this:
Any tip would be greatly appreciated. Thank you.
I tried pd.readDataframe, pd.read_json
A:
Note: Do not share your data as a picture in your next questions. If you share it as text, we can easily copy and paste it. You can use this for now.
a={'coord': {'lon': 13.4105, 'lat': 52.5244}, 'weather': [{'id': 741, 'main': 'Fog', 'description': 'fog', 'icon': '50d'}], 'bas e': 'stations', 'main': {'temp': 5.95, 'feels like': 4.42, 'temp_min': 4.98, 'temp_max': 8.32, 'pressure': 1006, 'humidity': 93}, 'visibility': 750, 'wind': {'speed': 2.06, 'deg': 70}, 'clouds': {'all': 20}, 'dt': 1669385464, 'sys': {'type': 2, 'id': 2011538, 'country': 'DE', 'sunrise': 1669358707, 'sunset': 1669388510}, 'timezone': 3600, 'id': 2950159, 'name': 'Berlin', 'cod': 200}
df=pd.json_normalize(a,record_path='weather',record_prefix='weather_').join(pd.json_normalize(a)).drop(['weather'],axis=1)
print(df)
'''
| | weather_id | weather_main | weather_description | weather_icon | bas e | visibility | dt | timezone | id | name | cod | coord.lon | coord.lat | main.temp | main.feels like | main.temp_min | main.temp_max | main.pressure | main.humidity | wind.speed | wind.deg | clouds.all | sys.type | sys.id | sys.country | sys.sunrise | sys.sunset |
|---:|-------------:|:---------------|:----------------------|:---------------|:---------|-------------:|-----------:|-----------:|--------:|:-------|------:|------------:|------------:|------------:|------------------:|----------------:|----------------:|----------------:|----------------:|-------------:|-----------:|-------------:|-----------:|---------:|:--------------|--------------:|-------------:|
| 0 | 741 | Fog | fog | 50d | stations | 750 | 1669385464 | 3600 | 2950159 | Berlin | 200 | 13.4105 | 52.5244 | 5.95 | 4.42 | 4.98 | 8.32 | 1006 | 93 | 2.06 | 70 | 20 | 2 | 2011538 | DE | 1669358707 | 1669388510 |
'''
|
Struggling to make a nested json API output into a pandas df
|
I am working with json data for the first time in Python (API output). I am struggling a bit to understand how to convert the following results to a pandas-like dataframe:
{'coord': {'lon': 13.4105, 'lat': 52.5244}, 'weather': [{'id': 801, 'main': 'Clouds', 'description': 'few clouds', 'icon': '02d'}], 'base': 'stations', 'main': {'temp': 3.21, 'feels_like': -1.29, 'temp_min': 2.22, 'temp_max': 4.09, 'pressure': 1007, 'humidity': 91}, 'visibility': 10000, 'wind': {'speed': 5.81, 'deg': 119, 'gust': 7.15}, 'clouds': {'all': 20}, 'dt': 1669622280, 'sys': {'type': 2, 'id': 2011538, 'country': 'DE', 'sunrise': 1669618193, 'sunset': 1669647541}, 'timezone': 3600, 'id': 2950159, 'name': 'Berlin', 'cod': 200}
Specifically, I would like that the data looked a bit like this:
Any tip would be greatly appreciated. Thank you.
I tried pd.readDataframe, pd.read_json
|
[
"Note: Do not share your data as a picture in your next questions. If you share it as a text, we can easily copy and paste it. You can use this for now.\na={'coord': {'lon': 13.4105, 'lat': 52.5244}, 'weather': [{'id': 741, 'main': 'Fog', 'description': 'fog', 'icon': '50d'}], 'bas e': 'stations', 'main': {'temp': 5.95, 'feels like': 4.42, 'temp_min': 4.98, 'temp_max': 8.32, 'pressure': 1006, 'humidity': 93}, 'visibility': 750, 'wind': {'speed': 2.06, 'deg': 70}, 'clouds': {'all': 20}, 'dt': 1669385464, 'sys': {'type': 2, 'id': 2011538, 'country': 'DE', 'sunrise': 1669358707, 'sunset': 1669388510}, 'timezone': 3600, 'id': 2950159, 'name': 'Berlin', 'cod': 200}\n\n\ndf=pd.json_normalize(a,record_path='weather',record_prefix='weather_').join(pd.json_normalize(a)).drop(['weather'],axis=1)\nprint(df)\n'''\n| | weather_id | weather_main | weather_description | weather_icon | bas e | visibility | dt | timezone | id | name | cod | coord.lon | coord.lat | main.temp | main.feels like | main.temp_min | main.temp_max | main.pressure | main.humidity | wind.speed | wind.deg | clouds.all | sys.type | sys.id | sys.country | sys.sunrise | sys.sunset |\n|---:|-------------:|:---------------|:----------------------|:---------------|:---------|-------------:|-----------:|-----------:|--------:|:-------|------:|------------:|------------:|------------:|------------------:|----------------:|----------------:|----------------:|----------------:|-------------:|-----------:|-------------:|-----------:|---------:|:--------------|--------------:|-------------:|\n| 0 | 741 | Fog | fog | 50d | stations | 750 | 1669385464 | 3600 | 2950159 | Berlin | 200 | 13.4105 | 52.5244 | 5.95 | 4.42 | 4.98 | 8.32 | 1006 | 93 | 2.06 | 70 | 20 | 2 | 2011538 | DE | 1669358707 | 1669388510 |\n'''\n\n"
] |
[
1
] |
[] |
[] |
[
"json",
"pandas",
"python"
] |
stackoverflow_0074574216_json_pandas_python.txt
|
Q:
Sprite shadow changing to full black
player.png
shadow comparison
The shadows are different when I blit the player image to a surface and then load that surface to the display, versus loading the entire image onto the display.
import pygame
pygame.init()
display = pygame.display.set_mode((1280, 736))
display.fill('#555358')
clock = pygame.time.Clock()
if __name__ == '__main__':
image_1 = pygame.Surface((16, 16)).convert_alpha()
image_1.blit(
pygame.image.load('player.png').convert_alpha(),
(0, 0),
(16, 32, 16, 16))
image = pygame.transform.scale(image_1, (16 * 3, 16 * 3))
image.set_colorkey((0, 0, 0))
display.blit(image, (0, 96))
image_2 = pygame.image.load('player.png').convert_alpha()
image_2 = pygame.transform.scale(image_2, (288 * 3, 240 * 3))
display.blit(image_2, (0, 0))
while True:
# Process player inputs.
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
raise SystemExit
pygame.display.flip()
clock.tick(60)
I thought setting the color key was messing with it, so I tried removing it to no avail
A:
You need to create a surface with an alpha channel (pygame.SRCALPHA) instead of converting it with convert_alpha and setting a color key with set_colorkey:
image_1 = pygame.Surface((16, 16), pygame.SRCALPHA)
image_1.blit(
pygame.image.load('player.png').convert_alpha(),
(0, 0),
(16, 32, 16, 16))
image = pygame.transform.scale(image_1, (16 * 3, 16 * 3))
display.blit(image, (0, 96))
Note: pygame.Surface((16, 16)) creates a completely black surface. In contrast, pygame.Surface((16, 16), pygame.SRCALPHA) creates a completely transparent surface. convert_alpha() changes the format of the image, but it remains solid black. Also see How to make a surface with a transparent background in pygame.
|
Sprite shadow changing to full black
|
player.png
shadow comparison
The shadows are different when I blit the player image to a surface and then loading that surface to the display vs loading the entire image on the display
import pygame
pygame.init()
display = pygame.display.set_mode((1280, 736))
display.fill('#555358')
clock = pygame.time.Clock()
if __name__ == '__main__':
image_1 = pygame.Surface((16, 16)).convert_alpha()
image_1.blit(
pygame.image.load('player.png').convert_alpha(),
(0, 0),
(16, 32, 16, 16))
image = pygame.transform.scale(image_1, (16 * 3, 16 * 3))
image.set_colorkey((0, 0, 0))
display.blit(image, (0, 96))
image_2 = pygame.image.load('player.png').convert_alpha()
image_2 = pygame.transform.scale(image_2, (288 * 3, 240 * 3))
display.blit(image_2, (0, 0))
while True:
# Process player inputs.
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
raise SystemExit
pygame.display.flip()
clock.tick(60)
I thought setting the color key was messing with it, so I tried removing it to no avail
|
[
"You need to create a surface with an alpha channel (pygame.SRCALPHA) instead of converting it with convert_alpha and setting a color key with set_colorkey:\nimage_1 = pygame.Surface((16, 16), pygame.SRCALPHA)\nimage_1.blit(\n pygame.image.load('player.png').convert_alpha(),\n (0, 0),\n (16, 32, 16, 16))\nimage = pygame.transform.scale(image_1, (16 * 3, 16 * 3))\ndisplay.blit(image, (0, 96))\n\nNote: pygame.Surface((16, 16)) creates a completely black surface. In contrast, pygame.Surface((16, 16), pygame.SRCALPHA) creates a completely transparent surface. convert_alpha() changes the format of the image, but it remains solid black. Also see How to make a surface with a transparent background in pygame.\n"
] |
[
1
] |
[] |
[] |
[
"pygame",
"pygame_surface",
"python",
"python_3.x"
] |
stackoverflow_0074581665_pygame_pygame_surface_python_python_3.x.txt
|
Q:
Asyncio lock acquire task at end of event loop
Consider the code below
import asyncio
async def waiter2(lock):
print('2 waiting for it ...')
async with lock:
print('2 ... got it!')
async def waiter(lock):
print('waiting for it ...')
async with lock:
print('... got it!')
async def main():
lock = asyncio.Lock()
await lock.acquire()
waiter_task = asyncio.create_task(waiter(lock))
waiter_task = asyncio.create_task(waiter2(lock))
await asyncio.sleep(2)
lock.release()
print("released")
asyncio.run(main())
When executed, this is the output:
waiting for it ...
2 waiting for it ...
released
... got it!
after the main function ends, the event loop only finishes the first waiter, even though the lock-acquire condition for the second waiter is met once the first waiter finishes; what is the technical reason behind this?
A:
You need to await the tasks to finish. In asyncio only one task runs at a time, so when you release the lock, your main function and the whole program will finish without ever switching to the waiter2 task.
import asyncio
async def waiter2(lock):
print("2 waiting for it ...")
async with lock:
print("2 ... got it!")
async def waiter(lock):
print("waiting for it ...")
async with lock:
print("... got it!")
async def main():
lock = asyncio.Lock()
await lock.acquire()
waiter_task_1 = asyncio.create_task(waiter(lock))
waiter_task_2 = asyncio.create_task(waiter2(lock))
await asyncio.sleep(2)
lock.release()
print("released")
await waiter_task_1 # <--- wait for waiter_task_1 to finish
await waiter_task_2 # <--- wait for waiter_task_2 to finish
asyncio.run(main())
Prints:
waiting for it ...
2 waiting for it ...
released
... got it!
2 ... got it!
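A variant of the same fix using asyncio.gather, which awaits every task in one call. This is a minimal sketch, not the original code; the task names are illustrative:

```python
import asyncio

async def waiter(name, lock):
    async with lock:        # queues on the lock until it is free
        return name

async def main():
    lock = asyncio.Lock()
    await lock.acquire()
    tasks = [asyncio.create_task(waiter(n, lock)) for n in ("first", "second")]
    await asyncio.sleep(0)  # let both tasks start and block on the lock
    lock.release()
    # gather keeps the program alive until *every* waiter has had its turn;
    # results come back in the order the tasks were passed in.
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
```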
|
Asyncio lock acquire task at end of event loop
|
Consider the code below
import asyncio
async def waiter2(lock):
print('2 waiting for it ...')
async with lock:
print('2 ... got it!')
async def waiter(lock):
print('waiting for it ...')
async with lock:
print('... got it!')
async def main():
lock = asyncio.Lock()
await lock.acquire()
waiter_task = asyncio.create_task(waiter(lock))
waiter_task = asyncio.create_task(waiter2(lock))
await asyncio.sleep(2)
lock.release()
print("released")
asyncio.run(main())
When executed, this is the output:
waiting for it ...
2 waiting for it ...
released
... got it!
after the main function ends, the event loop only cares to finish the first waiter, but the lock acquire condition for the second waiter are met by the end of the first waiter, what is the technical reason behind this?
|
[
"You need to await for the tasks to finish. In asyncio only one task is running at time. So when you release the lock your main function and the whole program will finish without switching to waiter2 task.\nimport asyncio\n\n\nasync def waiter2(lock):\n print(\"2 waiting for it ...\")\n async with lock:\n print(\"2 ... got it!\")\n\n\nasync def waiter(lock):\n print(\"waiting for it ...\")\n async with lock:\n print(\"... got it!\")\n\n\nasync def main():\n lock = asyncio.Lock()\n await lock.acquire()\n\n waiter_task_1 = asyncio.create_task(waiter(lock))\n waiter_task_2 = asyncio.create_task(waiter2(lock))\n\n await asyncio.sleep(2)\n\n lock.release()\n print(\"released\")\n\n await waiter_task_1 # <--- wait for waiter_task_1 to finish\n await waiter_task_2 # <--- wait for waiter_task_2 to finish\n\n\nasyncio.run(main())\n\nPrints:\nwaiting for it ...\n2 waiting for it ...\nreleased\n... got it!\n2 ... got it!\n\n"
] |
[
1
] |
[] |
[] |
[
"asynchronous",
"python",
"python_asyncio"
] |
stackoverflow_0074581494_asynchronous_python_python_asyncio.txt
|
Q:
List matches of page.search_for() with PyMuPDF
I'm writing a script to highlight text from a list of quotes in a PDF. The quotes are in the list text_list. I use this code to highlight the text in the PDF:
import fitz
#Load Document
doc = fitz.open(filename)
#Iterate over pages
for page in doc:
# iterate through each text using for loop and annotate
for i, text in enumerate(text_list):
rl = page.search_for(text, quads = True)
page.add_highlight_annot(rl)
# Print how many results were found
print(str(i) + " instances highlighted in pdf")
I now want to get a list of the quotes that were not found and highlighted and was wondering if there is any simple way to get a list of the matches page.search_for found (or of those quotes it didn't find).
A:
The list of hit rectangles / quads rl will be empty if nothing was found.
I suggest you check if rl == []: and make both the highlighting and the appending of the respective text to some no_hit list depend on that.
Probably better the other way round:
Your text list should really be a Python set. Whenever a text is found, put it in another set, found_set. At the end of processing, subtract (set difference) the found set from the text_list set.
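A sketch of that set-based bookkeeping, with search and highlight as hypothetical stand-ins for page.search_for and page.add_highlight_annot (so the logic is testable without a PDF):

```python
def highlight_quotes(pages, quotes, search, highlight):
    """Highlight every hit and return the quotes never found on any page."""
    not_found = set(quotes)
    for page in pages:
        for text in quotes:
            rl = search(page, text)
            if rl:                      # empty list -> no match on this page
                highlight(page, rl)
                not_found.discard(text)
    return not_found
```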
|
List matches of page.search_for() with PyMuPDF
|
I'm writing a script to highlight text from a list of quotes in a PDF. The quotes are in the list text_list. I use this code to highlight the text in the PDF:
import fitz
#Load Document
doc = fitz.open(filename)
#Iterate over pages
for page in doc:
# iterate through each text using for loop and annotate
for i, text in enumerate(text_list):
rl = page.search_for(text, quads = True)
page.add_highlight_annot(rl)
# Print how many results were found
print(str(i) + " instances highlighted in pdf")
I now want to get a list of the quotes that were not found and highlighted and was wondering if there is any simple way to get a list of the matches page.search_for found (or of those quotes it didn't find).
|
[
"The list of hit rectangles / quads rl will be empty if nothing was found.\nI suggest you check if rl == []: and depend adding highlights on this as well as adding the respective text to some no_hit list.\nProbably better the other way round:\nYour text list better should be a Python set. If a text was ever found put it in another, found_set. At end of processing subtract (set difference) the found set from text_list set.\n"
] |
[
2
] |
[] |
[] |
[
"pymupdf",
"python"
] |
stackoverflow_0074581135_pymupdf_python.txt
|
Q:
How to %run a list of notebooks in Databricks
I'd like to %run a list of notebooks from another Databricks notebook.
my_notebooks = ["./setup", "./do_the_main_thing", "./check_results"]
for notebook in my_notebooks:
%run notebook
This doesn't work, of course.
I don't want to use dbutils.notebook.run() as this creates new jobs and doesn't return anything back - I want everything executable and queryable from the main notebook.
I thought perhaps it might be possible to import the actual module and run the function.
?%run shows the command points to IPython/core/magics/execution.py
and run is a method of the class ExecutionMagics in the module execution.
So perhaps, I could use execution.ExecutionMagic.run() if I created an instance of the class.
But it's beyond me - tricky and I'm doubting it's an effective solution.
How can this be done?
Am I really stuck with:
%run ./a_notebook
%run ./another_notebook
%run ./yet_another_hardcoded_notebook_name
Eternally grateful for any help!
A:
Unfortunately it's not possible: %run doesn't allow passing the notebook name as a variable (see this answer for more details and a possible workaround).
Another approach would be to use the so-called arbitrary files in repos functionality: if you define the code as a Python file instead of a notebook, you'll be able to use it as a normal Python module, and even load it dynamically if you need to.
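With the arbitrary-files approach, the loop from the question becomes a loop of imports via importlib. A sketch assuming each former notebook now exists as an importable Python file on the path (the module names are illustrative):

```python
import importlib

def run_all(module_names):
    # Importing a module executes its top-level code once; keeping the
    # module objects lets the caller reach their functions afterwards.
    return {name: importlib.import_module(name) for name in module_names}

# e.g. modules = run_all(["setup", "do_the_main_thing", "check_results"])
```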
|
How to %run a list of notebooks in Databricks
|
I'd like to %run a list of notebooks from another Databricks notebook.
my_notebooks = ["./setup", "./do_the_main_thing", "./check_results"]
for notebook in my_notebooks:
%run notebook
This doesn't work ofcourse.
I don't want to use dbutils.notebook.run() as this creates new jobs and doesn't return anything back - I want everything executable and queryable from the main notebook.
I thought perhaps it might be possible to import the actual module and run the function.
?%run shows the command points to IPython/core/magics/execution.py
and run is a method of the class ExecutionMagics in the module execution.
So perhaps, I could use execution.ExecutionMagic.run() if I created an instance of the class.
But it's beyond me - tricky and I'm doubting it's an effective solution.
How can this be done?
Am I really stuck with:-
%run ./a notebook
%run ./another_notebook
%run ./yet_another_hardcoded_notebook_name
Eternally grateful for any help!
|
[
"Unfortunately it's not possible to do - %run doesn't allow to pass notebook name as a variable (see this answer with more details, and possible workaround).\nAnother approach would be to use so-called arbitrary files in repos functionality - if you define code as a Python file instead of notebook, then you'll be able to use it as normal Python module, and even load it dynamically if you need.\n"
] |
[
0
] |
[] |
[] |
[
"databricks",
"ipython",
"python"
] |
stackoverflow_0074518979_databricks_ipython_python.txt
|
Q:
Python change dictionary value from key in a string
I have successfully used recursion to find the key of a variable I want to change in an API response JSON.
The recursion returns the key; the equivalent is like this:
obj_key = "obj['key1']['key2'][1]['key3'][4]['key4'][0]"
if I eval this:
eval(obj_key)
I get the value no problem.
Now I want to change the value if it isn't what I want it to be. I can't figure this out and only get a SyntaxError with every attempt .... all attempts are some form of:
eval(obj_key + ' = "my_new_value"')
I have slept on this one thinking it would come to me in a day or two (sometimes this works) but alas no epiphany for me. Thanks for any help!
A:
Using exec instead of eval seems to solve this problem:
eval("y=12") #SyntaxError: invalid syntax
But replacing it with exec:
exec("y=12")
print(y) #12
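Applied to the question's nested-key string, a minimal sketch (with a made-up nested structure standing in for the API response) could look like this:

```python
# Made-up nested structure standing in for the API response JSON.
obj = {"key1": {"key2": [None, {"key3": "old"}]}}
obj_key = "obj['key1']['key2'][1]['key3']"

# eval only handles expressions, so an assignment raises SyntaxError;
# exec runs statements, so the assignment succeeds and mutates obj in place.
exec(obj_key + " = 'my_new_value'")

print(obj["key1"]["key2"][1]["key3"])  # my_new_value
```

Note that the subscript assignment mutates `obj` through its existing reference, so this works regardless of scope.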
A:
Instead of using eval, you could keep a list of keys and a reference to the object. So instead of building a string
"obj['key1']['key2'][1]['key3'][4]['key4'][0]"
You'd just need
obj
and
['key1', 'key2', 1, 'key3', 4, 'key4', 0]
Which is probably easier to create and manipulate in your Python code.
To use this to get the value, you could write a function:
def my_get(obj, keys): # TODO better name
for key in keys:
obj = obj[key]
return obj
And to set a value:
def my_set(obj, keys, value): # TODO better name
last_obj = my_get(obj, keys[:-1]) # get the last dict or list, by skipping the last key
last_obj[keys[-1]] = value # now set the value using the last key
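A quick usage sketch of the two helpers above (restated here so the snippet is self-contained, with a made-up nested structure shaped like the question's key path):

```python
def my_get(obj, keys):
    # Walk down the nested structure one key at a time.
    for key in keys:
        obj = obj[key]
    return obj

def my_set(obj, keys, value):
    # Get the container one level up, then assign through the last key.
    my_get(obj, keys[:-1])[keys[-1]] = value

# Made-up nested structure matching the question's key path.
obj = {"key1": {"key2": [None, {"key3": [0, 1, 2, 3, {"key4": ["old"]}]}]}}
keys = ["key1", "key2", 1, "key3", 4, "key4", 0]

assert my_get(obj, keys) == "old"
my_set(obj, keys, "my_new_value")
assert my_get(obj, keys) == "my_new_value"
```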
|
Python change dictionary value from key in a string
|
I have successfully used recursion to find the key of a variable I want to change in an API response JSON.
The recursion returns the key; the equivalent is like this:
obj_key = "obj['key1']['key2'][1]['key3'][4]['key4'][0]"
if I eval this:
eval(obj_key)
I get the value no problem.
Now I want to change the value if it isn't what I want it to be. I can't figure this out and only get a SyntaxError with every attempt .... all attempts are some form of:
eval(obj_key + ' = "my_new_value"')
I have slept on this one thinking it would come to me in a day or two (sometimes this works) but alas no epiphany for me. Thanks for any help!
|
[
"Using exec instead of eval seems solving this problem:\neval(\"y=12\") #SyntaxError: invalid syntax\n\nBut replacing it with exec:\nexec(\"y=12\")\nprint(y) #12\n\n",
"Instead of using eval, you could keep a list of keys and a reference to the object. So instead of building a string\n\"obj['key1']['key2'][1]['key3'][4]['key4'][0]\"\n\nYou'd just need\nobj\n\nand\n['key1', 'key2', 1, 'key3', 4, 'key4', 0]\n\nWhich is probably easier to create and manipulate in your Python code.\nTo use this to get the value, you could write a function:\ndef my_get(obj, keys): # TODO better name\n for key in keys:\n obj = obj[key]\n return obj\n\nAnd to set a value:\ndef my_set(obj, keys, value): # TODO better name\n last_obj = my_get(obj, keys[:-1]) # get the last dict or list, by skipping the last key\n last_obj[keys[-1]] = value # now set the value using the last key\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"dictionary",
"object",
"python",
"string"
] |
stackoverflow_0074581669_dictionary_object_python_string.txt
|
Q:
Qt - update view size on delegate sizeHint change
I have a QTreeView with a QStyledItemDelegate inside of it. When a certain action occurs to the delegate, its size is supposed to change. However I haven't figured out how to get the QTreeView's rows to resize in response to the delegate's editor size changing. I tried QTreeView.updateGeometry and QTreeView.repaint and a couple other things but it doesn't seem to work. Could someone point me in the right direction?
Here's a minimal reproduction (note: The code is hacky in a few places, it's just meant to be a demonstration of the problem, not a demonstration of good MVC).
Steps:
Run the code below
Press either "Add a label" button
Note that the height of the row in the QTreeView does not change no matter how many times either button is clicked.
from PySide2 import QtCore, QtWidgets
_VALUE = 100
class _Clicker(QtWidgets.QWidget):
clicked = QtCore.Signal()
def __init__(self, parent=None):
super(_Clicker, self).__init__(parent=parent)
self.setLayout(QtWidgets.QVBoxLayout())
self._button = QtWidgets.QPushButton("Add a label")
self.layout().addWidget(self._button)
self._button.clicked.connect(self._add_label)
self._button.clicked.connect(self.clicked.emit)
def _add_label(self):
global _VALUE
_VALUE += 10
self.layout().addWidget(QtWidgets.QLabel("Add a label"))
self.updateGeometry() # Note: I didn't expect this to work but added it regardless
class _Delegate(QtWidgets.QStyledItemDelegate):
def createEditor(self, parent, option, index):
widget = _Clicker(parent=parent)
viewer = self.parent()
widget.clicked.connect(viewer.updateGeometries) # Note: I expected this to work
return widget
def paint(self, painter, option, index):
super(_Delegate, self).paint(painter, option, index)
viewer = self.parent()
if not viewer.isPersistentEditorOpen(index):
viewer.openPersistentEditor(index)
def setEditorData(self, editor, index):
pass
def updateEditorGeometry(self, editor, option, index):
editor.setGeometry(option.rect)
def sizeHint(self, option, index):
hint = index.data(QtCore.Qt.SizeHintRole)
if hint:
return hint
return super(_Delegate, self).sizeHint(option, index)
class _Model(QtCore.QAbstractItemModel):
def __init__(self, parent=None):
super(_Model, self).__init__(parent=parent)
self._labels = ["foo", "bar"]
def columnCount(self, parent=QtCore.QModelIndex()):
return 1
def data(self, index, role):
if role == QtCore.Qt.SizeHintRole:
return QtCore.QSize(200, _VALUE)
if role != QtCore.Qt.DisplayRole:
return None
return self._labels[index.row()]
def index(self, row, column, parent=QtCore.QModelIndex()):
child = self._labels[row]
return self.createIndex(row, column, child)
def parent(self, index):
return QtCore.QModelIndex()
def rowCount(self, parent=QtCore.QModelIndex()):
if parent.isValid():
return 0
return len(self._labels)
application = QtWidgets.QApplication([])
view = QtWidgets.QTreeView()
view.setModel(_Model())
view.setItemDelegate(_Delegate(parent=view))
view.show()
application.exec_()
How do I get a single row in a QTreeView, which has a persistent editor applied already to it, to tell Qt to resize in response to some change in the editor?
Note: One possible solution would be to close the persistent editor and re-open it to force Qt to redraw the editor widget. This would be generally very slow and not work in my specific situation. Keeping the same persistent editor is important.
A:
As the documentation about updateGeometries() explains, it:
Updates the geometry of the child widgets of the view.
This is used to update the widgets (editors, scroll bars, headers, etc) based on the current view state. It doesn't consider the editor size hints, so that call or the attempt to update the size hint is useless (and, it should go without saying, using global for this is wrong).
In order to properly notify the view that a specific index has updated its size hint, you must use the delegate's sizeHintChanged signal, which should also be emitted when the editor is created in order to ensure that the view makes enough room for it; note that this is normally not required for standard editors (as, being temporary, they should not try to change the layout of the view), but for persistent editors that are potentially big, it may be necessary.
Other notes:
calling updateGeometry() on the widget is pointless in this case, as adding a widget to a layout automatically results in a LayoutRequest event (which is what updateGeometry() does, among other things);
as explained in createEditor(), "the view's background will shine through unless the editor paints its own background (e.g., with setAutoFillBackground())";
the SizeHintRole of the model should always return a size important for the model (if any), not based on the editor; it's the delegate responsibility to do that, and the model should never be influenced by any of its views;
opening a persistent editor in a paint event is wrong; only drawing related aspects should ever happen in a paint function, most importantly because they are called very often (even hundreds of times per second for item views) so they should be as fast as possible, but also because doing anything that might affect a change in geometry will cause (at least) a recursive call;
signals can be "chained" without using emit: self._button.clicked.connect(self.clicked) would have sufficed;
Considering all the above, there are two possibilities. The problem is that there is no direct correlation between the editor widget and the index it's referred to, so we need to find a way to emit sizeHintChanged with its correct index when the editor is updated.
This can only be done by creating a reference of the index for the editor, but it's important that we use a QPersistentModelIndex for that, as the indexes might change while a persistent editor is opened (for example, when sorting or filtering), and the index provided in the arguments of delegate functions is not able to track these changes.
Emit a custom signal
In this case, we only use a custom signal that is emitted whenever we know that the layout is changed, and we create a local function in createEditor that will eventually emit the sizeHintChanged signal by "reconstructing" the valid index:
class _Clicker(QtWidgets.QWidget):
sizeHintChanged = QtCore.Signal()
def __init__(self, parent=None):
super().__init__(parent)
self.setAutoFillBackground(True)
layout = QtWidgets.QVBoxLayout(self)
self._button = QtWidgets.QPushButton("Add a label")
layout.addWidget(self._button)
self._button.clicked.connect(self._add_label)
def _add_label(self):
self.layout().addWidget(QtWidgets.QLabel("Add a label"))
self.sizeHintChanged.emit()
class _Delegate(QtWidgets.QStyledItemDelegate):
def createEditor(self, parent, option, index):
widget = _Clicker(parent)
persistent = QtCore.QPersistentModelIndex(index)
def emitSizeHintChanged():
index = persistent.model().index(
persistent.row(), persistent.column(),
persistent.parent())
self.sizeHintChanged.emit(index)
widget.sizeHintChanged.connect(emitSizeHintChanged)
self.sizeHintChanged.emit(index)
return widget
# no other functions implemented here
Use the delegate's event filter
We can create a reference for the persistent index in the editor, and then emit the sizeHintChanged signal in the event filter of the delegate whenever a LayoutRequest event is received from the editor:
class _Clicker(QtWidgets.QWidget):
def __init__(self, parent=None):
super().__init__(parent)
self.setAutoFillBackground(True)
layout = QtWidgets.QVBoxLayout(self)
self._button = QtWidgets.QPushButton("Add a label")
layout.addWidget(self._button)
self._button.clicked.connect(self._add_label)
def _add_label(self):
self.layout().addWidget(QtWidgets.QLabel("Add a label"))
class _Delegate(QtWidgets.QStyledItemDelegate):
def createEditor(self, parent, option, index):
widget = _Clicker(parent)
widget.index = QtCore.QPersistentModelIndex(index)
return widget
def eventFilter(self, editor, event):
if event.type() == event.LayoutRequest:
persistent = editor.index
index = persistent.model().index(
persistent.row(), persistent.column(),
persistent.parent())
self.sizeHintChanged.emit(index)
return super().eventFilter(editor, event)
Finally, you should obviously remove the SizeHintRole return in data(), and in order to open all persistent editors you could do something like this:
def openEditors(view, parent=None):
model = view.model()
if parent is None:
parent = QtCore.QModelIndex()
for row in range(model.rowCount(parent)):
for column in range(model.columnCount(parent)):
index = model.index(row, column, parent)
view.openPersistentEditor(index)
if model.rowCount(index):
openEditors(view, index)
# ...
openEditors(view)
A:
I had a similar problem when I was adding new widgets to a QFrame, because the dumb thing was not updating the value of its sizeHint( ) after adding each new widget. It seems that QWidgets (including QFrames) only update their sizeHint( ) when the child widgets are "visible". Somehow, on some occasions Qt sets new children to "not visible" when they are added, don't ask me why. You can see if a widget is visible by calling isVisible( ), and change its visibility status with setVisible(...). I solved my problem by telling the QFrame that the new child widgets were intended to be visible, by calling setVisible( True ) on each child after adding it to the QFrame. Some Qt fundamentalists may say that this is a blasphemous hack that breaks the fabric of space time or something and that I should be burnt at the stake, but I don't care, it works, and it works very well in a quite complex GUI that I have built.
|
Qt - update view size on delegate sizeHint change
|
I have a QTreeView with a QStyledItemDelegate inside of it. When a certain action occurs to the delegate, its size is supposed to change. However I haven't figured out how to get the QTreeView's rows to resize in response to the delegate's editor size changing. I tried QTreeView.updateGeometry and QTreeView.repaint and a couple other things but it doesn't seem to work. Could someone point me in the right direction?
Here's a minimal reproduction (note: The code is hacky in a few places, it's just meant to be a demonstration of the problem, not a demonstration of good MVC).
Steps:
Run the code below
Press either "Add a label" button
Note that the height of the row in the QTreeView does not change no matter how many times either button is clicked.
from PySide2 import QtCore, QtWidgets
_VALUE = 100
class _Clicker(QtWidgets.QWidget):
clicked = QtCore.Signal()
def __init__(self, parent=None):
super(_Clicker, self).__init__(parent=parent)
self.setLayout(QtWidgets.QVBoxLayout())
self._button = QtWidgets.QPushButton("Add a label")
self.layout().addWidget(self._button)
self._button.clicked.connect(self._add_label)
self._button.clicked.connect(self.clicked.emit)
def _add_label(self):
global _VALUE
_VALUE += 10
self.layout().addWidget(QtWidgets.QLabel("Add a label"))
self.updateGeometry() # Note: I didn't expect this to work but added it regardless
class _Delegate(QtWidgets.QStyledItemDelegate):
def createEditor(self, parent, option, index):
widget = _Clicker(parent=parent)
viewer = self.parent()
widget.clicked.connect(viewer.updateGeometries) # Note: I expected this to work
return widget
def paint(self, painter, option, index):
super(_Delegate, self).paint(painter, option, index)
viewer = self.parent()
if not viewer.isPersistentEditorOpen(index):
viewer.openPersistentEditor(index)
def setEditorData(self, editor, index):
pass
def updateEditorGeometry(self, editor, option, index):
editor.setGeometry(option.rect)
def sizeHint(self, option, index):
hint = index.data(QtCore.Qt.SizeHintRole)
if hint:
return hint
return super(_Delegate, self).sizeHint(option, index)
class _Model(QtCore.QAbstractItemModel):
def __init__(self, parent=None):
super(_Model, self).__init__(parent=parent)
self._labels = ["foo", "bar"]
def columnCount(self, parent=QtCore.QModelIndex()):
return 1
def data(self, index, role):
if role == QtCore.Qt.SizeHintRole:
return QtCore.QSize(200, _VALUE)
if role != QtCore.Qt.DisplayRole:
return None
return self._labels[index.row()]
def index(self, row, column, parent=QtCore.QModelIndex()):
child = self._labels[row]
return self.createIndex(row, column, child)
def parent(self, index):
return QtCore.QModelIndex()
def rowCount(self, parent=QtCore.QModelIndex()):
if parent.isValid():
return 0
return len(self._labels)
application = QtWidgets.QApplication([])
view = QtWidgets.QTreeView()
view.setModel(_Model())
view.setItemDelegate(_Delegate(parent=view))
view.show()
application.exec_()
How do I get a single row in a QTreeView, which has a persistent editor applied already to it, to tell Qt to resize in response to some change in the editor?
Note: One possible solution would be to close the persistent editor and re-open it to force Qt to redraw the editor widget. This would be generally very slow and not work in my specific situation. Keeping the same persistent editor is important.
|
[
"As the documentation about updateGeometries() explains, it:\n\nUpdates the geometry of the child widgets of the view.\n\nThis is used to update the widgets (editors, scroll bars, headers, etc) based on the current view state. It doesn't consider the editor size hints, so that call or the attempt to update the size hint is useless (and, it should go without saying, using global for this is wrong).\nIn order to properly notify the view that a specific index has updated its size hint, you must use the delegate's sizeHintChanged signal, which should also be emitted when the editor is created in order to ensure that the view makes enough room for it; note that this is normally not required for standard editors (as, being they temporary, they should not try to change the layout of the view), but for persistent editors that are potentially big, it may be necessary.\nOther notes:\n\ncalling updateGeometry() on the widget is pointless in this case, as adding a widget to a layout automatically results in a LayoutRequest event (which is what updateGeometry() does, among other things);\nas explained in createEditor(), \"the view's background will shine through unless the editor paints its own background (e.g., with setAutoFillBackground())\";\nthe SizeHintRole of the model should always return a size important for the model (if any), not based on the editor; it's the delegate responsibility to do that, and the model should never be influenced by any of its views;\nopening a persistent editor in a paint event is wrong; only drawing related aspects should ever happen in a paint function, most importantly because they are called very often (even hundreds of times per second for item views) so they should be as fast as possible, but also because doing anything that might affect a change in geometry will cause (at least) a recursive call;\nsignals can be \"chained\" without using emit: self._button.clicked.connect(self.clicked) would have sufficed;\n\nConsidering all the above, 
there are two possibilities. The problem is that there is no direct correlation between the editor widget and the index it's referred to, so we need to find a way to emit sizeHintChanged with its correct index when the editor is updated.\nThis can only be done by creating a reference of the index for the editor, but it's important that we use a QPersistentModelIndex for that, as the indexes might change while a persistent editor is opened (for example, when sorting or filtering), and the index provided in the arguments of delegate functions is not able to track these changes.\nEmit a custom signal\nIn this case, we only use a custom signal that is emitted whenever we know that the layout is changed, and we create a local function in createEditor that will eventually emit the sizeHintChanged signal by \"reconstructing\" the valid index:\nclass _Clicker(QtWidgets.QWidget):\n sizeHintChanged = QtCore.Signal()\n def __init__(self, parent=None):\n super().__init__(parent)\n self.setAutoFillBackground(True)\n\n layout = QtWidgets.QVBoxLayout(self)\n\n self._button = QtWidgets.QPushButton(\"Add a label\")\n layout.addWidget(self._button)\n\n self._button.clicked.connect(self._add_label)\n\n def _add_label(self):\n self.layout().addWidget(QtWidgets.QLabel(\"Add a label\"))\n self.sizeHintChanged.emit()\n\n\nclass _Delegate(QtWidgets.QStyledItemDelegate):\n def createEditor(self, parent, option, index):\n widget = _Clicker(parent)\n persistent = QtCore.QPersistentModelIndex(index)\n\n def emitSizeHintChanged():\n index = persistent.model().index(\n persistent.row(), persistent.column(), \n persistent.parent())\n self.sizeHintChanged.emit(index)\n\n widget.sizeHintChanged.connect(emitSizeHintChanged)\n self.sizeHintChanged.emit(index)\n return widget\n\n # no other functions implemented here\n\nUse the delegate's event filter\nWe can create a reference for the persistent index in the editor, and then emit the sizeHintChanged signal in the event filter of the delegate 
whenever a LayoutRequest event is received from the editor:\nclass _Clicker(QtWidgets.QWidget):\n def __init__(self, parent=None):\n super().__init__(parent)\n self.setAutoFillBackground(True)\n\n layout = QtWidgets.QVBoxLayout(self)\n\n self._button = QtWidgets.QPushButton(\"Add a label\")\n layout.addWidget(self._button)\n\n self._button.clicked.connect(self._add_label)\n\n def _add_label(self):\n self.layout().addWidget(QtWidgets.QLabel(\"Add a label\"))\n\n\nclass _Delegate(QtWidgets.QStyledItemDelegate):\n def createEditor(self, parent, option, index):\n widget = _Clicker(parent)\n widget.index = QtCore.QPersistentModelIndex(index)\n return widget\n\n def eventFilter(self, editor, event):\n if event.type() == event.LayoutRequest:\n persistent = editor.index\n index = persistent.model().index(\n persistent.row(), persistent.column(), \n persistent.parent())\n self.sizeHintChanged.emit(index)\n return super().eventFilter(editor, event)\n\n\nFinally, you should obviously remove the SizeHintRole return in data(), and in order to open all persistent editors you could do something like this:\ndef openEditors(view, parent=None):\n model = view.model()\n if parent is None:\n parent = QtCore.QModelIndex()\n for row in range(model.rowCount(parent)):\n for column in range(model.columnCount(parent)):\n index = model.index(row, column, parent)\n view.openPersistentEditor(index)\n if model.rowCount(index):\n openEditors(view, index)\n\n# ...\nopenEditors(view)\n\n",
"I had a similar problem when I was adding new widgets to a QFrame, because the dumb thing was not updating the value of its sizeHint( ) after adding each new widget. It seems that QWidgets (including QFrames) only update their sizeHint( ) when the children widgets are \"visible\". Somehow, in some occassions Qt sets new children to \"not visible\" when they are added, don't ask me why. You can see if a widget is visible by calling isVisible( ), and change its visibility status with setVisible(...). I solved my problem by telling to the QFrame that they new child widgets were intended to be visible, by calling setVisible( True ) with each of each child after adding them to the QFrame. Some Qt fundamentalists may say that this is a blasphemous hack that breaks the fabric of space time or something and that I should be burnt at the stake, but I don't care, it works, and it works very well in a quite complex GUI that I have built.\n"
] |
[
1,
1
] |
[] |
[] |
[
"pyside2",
"python"
] |
stackoverflow_0071358160_pyside2_python.txt
|
Q:
Getting ValueError: y contains new labels when using scikit learn's LabelEncoder
I have a series like:
df['ID'] = ['ABC123', 'IDF345', ...]
I'm using scikit's LabelEncoder to convert it to numerical values to be fed into the RandomForestClassifier.
During the training, I'm doing as follows:
le_id = LabelEncoder()
df['ID'] = le_id.fit_transform(df.ID)
But, now for testing/prediction, when I pass in new data, I want to transform the 'ID' from this data based on le_id i.e., if same values are present then transform it according to the above label encoder, otherwise assign a new numerical value.
In the test file, I was doing as follows:
new_df['ID'] = le_dpid.transform(new_df.ID)
But, I'm getting the following error: ValueError: y contains new labels
How do I fix this?? Thanks!
UPDATE:
So the task I have is to use the below (for example) as training data and predict the 'High', 'Mod', 'Low' values for new BankNum, ID combinations. The model should learn from the training dataset the characteristics of where a 'High' is given and where a 'Low' is given. For example, below a 'High' is given when there are multiple entries with the same BankNum and different IDs.
df =
BankNum | ID | Labels
0098-7772 | AB123 | High
0098-7772 | ED245 | High
0098-7772 | ED343 | High
0870-7771 | ED200 | Mod
0870-7771 | ED100 | Mod
0098-2123 | GH564 | Low
And then predict it on something like:
BankNum | ID |
00982222 | AB999 |
00982222 | AB999 |
00981111 | AB890 |
I'm doing something like this:
df['BankNum'] = df.BankNum.astype(np.float128)
le_id = LabelEncoder()
df['ID'] = le_id.fit_transform(df.ID)
X_train, X_test, y_train, y_test = train_test_split(df[['BankNum', 'ID']], df.Labels, test_size=0.25, random_state=42)
clf = RandomForestClassifier(random_state=42, n_estimators=140)
clf.fit(X_train, y_train)
A:
I think the error message is very clear: your test dataset contains ID labels which have not been included in your training data set. For these items, the LabelEncoder cannot find a suitable numeric value to represent them. There are a few ways to solve this problem. You can either try to balance your data set, so that you are sure that each label is not only present in your test but also in your training data, or you can try to follow one of the ideas presented here.
One possible solution is to search through your data set at the beginning, get a list of all unique ID values, train the LabelEncoder on this list, and keep the rest of your code just as it is at the moment.
Another possible solution is to check that the test data contain only labels which have been seen in the training process. If there is a new label, you have to set it to some fallback value like unknown_id (or something like this). Doing this, you put all new, unknown IDs in one class; for these items the prediction will then fail, but you can use the rest of your code as it is now.
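A minimal sketch of that fallback idea, using a plain dict as a stand-in for the fitted LabelEncoder (the label values here are made up):

```python
# Training and test labels (made up); "ZZZ999" was never seen in training.
train_ids = ["ABC123", "IDF345", "ABC123"]
test_ids = ["IDF345", "ZZZ999"]

# Build the encoding from the training data only, as LabelEncoder.fit would.
mapping = {label: code for code, label in enumerate(sorted(set(train_ids)))}
unknown_id = len(mapping)  # dedicated fallback code for unseen labels

# Unseen labels fall back to unknown_id instead of raising an error.
encoded_test = [mapping.get(label, unknown_id) for label in test_ids]
print(encoded_test)  # [1, 2]
```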
A:
you can try solution from "sklearn.LabelEncoder with never seen before values" https://stackoverflow.com/a/48169252/9043549
The thing is to create dictionary with classes, than map column and fill new classes with some "known value"
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
suf="_le"
col="a"
df[col+suf] = le.fit_transform(df[col])
dic = dict(zip(le.classes_, le.transform(le.classes_)))
col='b'
df[col+suf]=df[col].map(dic).fillna(dic["c"]).astype(int)
A:
If your data are pd.DataFrame I suggest you this simple solution...
I build a custom transformer that integer maps each categorical feature. When fitted you can transform all the data you want. You can compute also simple label encoding or onehot encoding.
If new unseen categories or NaNs are present in new data:
1] For label encoding, 0 is a special token reserved for mapping these cases.
2] For onehot encoding, all the onehot columns are zeros in these cases.
class FeatureTransformer:
def __init__(self, categorical_features):
self.categorical_features = categorical_features
def fit(self, X):
if not isinstance(X, pd.DataFrame):
raise ValueError("Pass a pandas.DataFrame")
if not isinstance(self.categorical_features, list):
raise ValueError(
"Pass categorical_features as a list of column names")
self.encoding = {}
for c in self.categorical_features:
_, int_id = X[c].factorize()
self.encoding[c] = dict(zip(list(int_id), range(1,len(int_id)+1)))
return self
def transform(self, X, onehot=True):
if not isinstance(X, pd.DataFrame):
raise ValueError("Pass a pandas.DataFrame")
if not hasattr(self, 'encoding'):
raise AttributeError("FeatureTransformer must be fitted")
df = X.drop(self.categorical_features, axis=1)
if onehot: # one-hot encoding
for c in sorted(self.categorical_features):
categories = X[c].map(self.encoding[c]).values
for val in self.encoding[c].values():
df["{}_{}".format(c,val)] = (categories == val).astype('int16')
else: # label encoding
for c in sorted(self.categorical_features):
df[c] = X[c].map(self.encoding[c]).fillna(0)
return df
Usage:
X_train = pd.DataFrame(np.random.randint(10,20, (100,10)))
X_test = pd.DataFrame(np.random.randint(20,30, (100,10)))
ft = FeatureTransformer(categorical_features=[0,1,3])
ft.fit(X_train)
ft.transform(X_test, onehot=False).shape
A:
I'm able to mentally process operations better when dealing in DataFrames. The approach below fits and transforms the LabelEncoder() using the training data, then uses a series of pd.merge joins to map the trained fit/transform encoder values to the test data. When there is a test data value not seen in the training data, the code defaults to the max trained encoder value + 1.
# encode class values as integers
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
encoder.fit(y_train)
encoded_y_train = encoder.transform(y_train)
# make a dataframe of the unique train values and their corresponding encoded integers
y_map = pd.DataFrame({'y_train': y_train, 'encoded_y_train': encoded_y_train})
y_map = y_map.drop_duplicates()
# map the unique test values to the trained encoded integers
y_test_df = pd.DataFrame({'y_test': y_test})
y_test_unique = y_test_df.drop_duplicates()
y_join = pd.merge(y_test_unique, y_map,
left_on = 'y_test', right_on = 'y_train',
how = 'left')
# if the test category is not found in the training category group, then make the
# value the maximum value of the training group + 1
y_join['encoded_y_test'] = np.where(y_join['encoded_y_train'].isnull(),
y_map.shape[0] + 1,
y_join['encoded_y_train']).astype('int')
encoded_y_test = pd.merge(y_test_df, y_join, on = 'y_test', how = 'left') \
.encoded_y_test.values
A:
I found an easy hack around this issue.
Assuming X is the dataframe of features,
First, we need to create a list of dicts which would have the key as the iterable starting from 0 and the corresponding value pair would be the categorical column name. We easily accomplish this using enum.
cat_cols_enum = list(enumerate(X.select_dtypes(include = ['O']).columns))
Then the idea is to create a list of label encoders whose dimension is equal to the number of qualitative(categorical) columns present in the dataframe X.
le = [LabelEncoder() for i in range(len(cat_cols_enum))]
Next and the last part would be fitting each of the label encoders present in the list of encoders with the unique values of each of the categorical columns present in the list of dicts respectively.
for i in cat_cols_enum: le[i[0]].fit(X[i[1]].value_counts().index)
Now, we can transform the labels to their respective encodings using
for i in cat_cols_enum:
X[i[1]] = le[i[0]].transform(X[i[1]])
A:
This error occurs when the transform function encounters a new value that the LabelEncoder has to encode but which was not present in the training corpus when you called fit_transform. So there is a hack: either fit_transform on all the unique values up front, if you are sure that no new value will come later, or try a different encoding method that suits the problem statement, like HashingEncoder.
Here is the example if no further new values will come in testing
le_id.fit_transform(list(set(df['ID'].unique()).union(set(new_df['ID'].unique()))))
new_df['ID'] = le_id.transform(new_df.ID)
A:
This is in fact a known bug in LabelEncoder: BUG for fit_transform ... basically you have to fit it and then transform. It will work fine! A suggestion is to keep a dictionary of your encoders, one for each column, so that in the inverse transform you are able to retrieve the original categorical values.
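A minimal sketch of that suggestion, using plain dicts as stand-ins for per-column LabelEncoder instances so the inverse transform back to the original categories is explicit (the column names and values are made up):

```python
# Made-up categorical columns.
rows = {"ID": ["ABC123", "IDF345", "ABC123"], "Bank": ["x", "y", "y"]}

encoders = {}  # one forward mapping per column, like a dict of LabelEncoders
decoders = {}  # the matching inverse mapping per column
for col, values in rows.items():
    enc = {v: i for i, v in enumerate(sorted(set(values)))}
    encoders[col] = enc
    decoders[col] = {i: v for v, i in enc.items()}

# Encode every column, then round-trip back to the original values.
encoded = {col: [encoders[col][v] for v in vals] for col, vals in rows.items()}
decoded = {col: [decoders[col][i] for i in vals] for col, vals in encoded.items()}
assert decoded == rows
```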
|
Getting ValueError: y contains new labels when using scikit learn's LabelEncoder
|
I have a series like:
df['ID'] = ['ABC123', 'IDF345', ...]
I'm using scikit's LabelEncoder to convert it to numerical values to be fed into the RandomForestClassifier.
During the training, I'm doing as follows:
le_id = LabelEncoder()
df['ID'] = le_id.fit_transform(df.ID)
But, now for testing/prediction, when I pass in new data, I want to transform the 'ID' from this data based on le_id i.e., if same values are present then transform it according to the above label encoder, otherwise assign a new numerical value.
In the test file, I was doing as follows:
new_df['ID'] = le_id.transform(new_df.ID)
But, I'm getting the following error: ValueError: y contains new labels
How do I fix this?? Thanks!
UPDATE:
So the task I have is to use the below (for example) as training data and predict the 'High', 'Mod', 'Low' values for new BankNum, ID combinations. The model should learn the characteristics where a 'High' is given, where a 'Low' is given from the training dataset. For example, below a 'High' is given when there are multiple entries with same BankNum and different IDs.
df =
BankNum | ID | Labels
0098-7772 | AB123 | High
0098-7772 | ED245 | High
0098-7772 | ED343 | High
0870-7771 | ED200 | Mod
0870-7771 | ED100 | Mod
0098-2123 | GH564 | Low
And then predict it on something like:
BankNum | ID |
00982222 | AB999 |
00982222 | AB999 |
00981111 | AB890 |
I'm doing something like this:
df['BankNum'] = df.BankNum.astype(np.float128)
le_id = LabelEncoder()
df['ID'] = le_id.fit_transform(df.ID)
X_train, X_test, y_train, y_test = train_test_split(df[['BankNum', 'ID']], df.Labels, test_size=0.25, random_state=42)
clf = RandomForestClassifier(random_state=42, n_estimators=140)
clf.fit(X_train, y_train)
|
[
"I think the error message is very clear: Your test dataset contains ID labels which have not been included in your training data set. For this items, the LabelEncoder can not find a suitable numeric value to represent. There are a few ways to solve this problem. You can either try to balance your data set, so that you are sure that each label is not only present in your test but also in your training data. Otherwise, you can try to follow one of the ideas presented here. \nOne of the possibles solutions is, that you search through your data set at the beginning, get a list of all unique ID values, train the LabelEncoder on this list, and keep the rest of your code just as it is at the moment.\nAn other possible solution is, to check that the test data have only labels which have been seen in the training process. If there is a new label, you have to set it to some fallback value like unknown_id (or something like this). Doin this, you put all new, unknown IDs in one class; for this items the prediction will then fail, but you can use the rest of your code as it is now.\n",
"you can try solution from \"sklearn.LabelEncoder with never seen before values\" https://stackoverflow.com/a/48169252/9043549\nThe thing is to create dictionary with classes, than map column and fill new classes with some \"known value\"\nfrom sklearn.preprocessing import LabelEncoder\nle = LabelEncoder()\nsuf=\"_le\"\ncol=\"a\"\ndf[col+suf] = le.fit_transform(df[col])\ndic = dict(zip(le.classes_, le.transform(le.classes_)))\ncol='b'\ndf[col+suf]=df[col].map(dic).fillna(dic[\"c\"]).astype(int) \n\n",
"If your data are pd.DataFrame I suggest you this simple solution...\nI build a custom transformer that integer maps each categorical feature. When fitted you can transform all the data you want. You can compute also simple label encoding or onehot encoding.\nIf new unseen categories or NaNs are present in new data:\n1] For label encoding, 0 is a special token reserved for mapping these cases.\n2] For onehot encoding, all the onehot columns are zeros in these cases.\nclass FeatureTransformer:\n \n def __init__(self, categorical_features):\n self.categorical_features = categorical_features\n \n def fit(self, X):\n\n if not isinstance(X, pd.DataFrame):\n raise ValueError(\"Pass a pandas.DataFrame\")\n \n if not isinstance(self.categorical_features, list):\n raise ValueError(\n \"Pass categorical_features as a list of column names\")\n \n self.encoding = {}\n for c in self.categorical_features:\n\n _, int_id = X[c].factorize()\n self.encoding[c] = dict(zip(list(int_id), range(1,len(int_id)+1)))\n \n return self\n\n def transform(self, X, onehot=True):\n\n if not isinstance(X, pd.DataFrame):\n raise ValueError(\"Pass a pandas.DataFrame\")\n\n if not hasattr(self, 'encoding'):\n raise AttributeError(\"FeatureTransformer must be fitted\")\n \n df = X.drop(self.categorical_features, axis=1)\n \n if onehot: # one-hot encoding\n for c in sorted(self.categorical_features): \n categories = X[c].map(self.encoding[c]).values\n for val in self.encoding[c].values():\n df[\"{}_{}\".format(c,val)] = (categories == val).astype('int16')\n else: # label encoding\n for c in sorted(self.categorical_features):\n df[c] = X[c].map(self.encoding[c]).fillna(0)\n \n return df\n\nUsage:\nX_train = pd.DataFrame(np.random.randint(10,20, (100,10)))\nX_test = pd.DataFrame(np.random.randint(20,30, (100,10)))\n\nft = FeatureTransformer(categorical_features=[0,1,3])\nft.fit(X_train)\n\nft.transform(X_test, onehot=False).shape\n\n",
"I'm able to mentally process operations better when dealing in DataFrames. The approach below fits and transforms the LabelEncoder() using the training data, then uses a series of pd.merge joins to map the trained fit/transform encoder values to the test data. When there is a test data value not seen in the training data, the code defaults to the max trained encoder value + 1.\n# encode class values as integers\nimport numpy as np\nimport pandas as pd\nfrom sklearn.preprocessing import LabelEncoder\nencoder = LabelEncoder()\nencoder.fit(y_train)\nencoded_y_train = encoder.transform(y_train)\n\n# make a dataframe of the unique train values and their corresponding encoded integers\ny_map = pd.DataFrame({'y_train': y_train, 'encoded_y_train': encoded_y_train})\ny_map = y_map.drop_duplicates()\n\n# map the unique test values to the trained encoded integers\ny_test_df = pd.DataFrame({'y_test': y_test})\ny_test_unique = y_test_df.drop_duplicates()\ny_join = pd.merge(y_test_unique, y_map, \n left_on = 'y_test', right_on = 'y_train', \n how = 'left')\n\n# if the test category is not found in the training category group, then make the \n# value the maximum value of the training group + 1 \ny_join['encoded_y_test'] = np.where(y_join['encoded_y_train'].isnull(), \n y_map.shape[0] + 1, \n y_join['encoded_y_train']).astype('int')\n\nencoded_y_test = pd.merge(y_test_df, y_join, on = 'y_test', how = 'left') \\\n .encoded_y_test.values\n\n",
"I found an easy hack around this issue.\nAssuming X is the dataframe of features,\n\nFirst, we need to create a list of dicts which would have the key as the iterable starting from 0 and the corresponding value pair would be the categorical column name. We easily accomplish this using enum.\ncat_cols_enum = list(enumerate(X.select_dtypes(include = ['O']).columns))\n\nThen the idea is to create a list of label encoders whose dimension is equal to the number of qualitative(categorical) columns present in the dataframe X.\nle = [LabelEncoder() for i in range(len(cat_cols_enum))]\n\nNext and the last part would be fitting each of the label encoders present in the list of encoders with the unique values of each of the categorical columns present in the list of dicts respectively.\nfor i in cat_cols_enum: le[i[0]].fit(X[i[1]].value_counts().index)\n\n\nNow, we can transform the labels to their respective encodings using\nfor i in cat_cols_enum:\nX[i[1]] = le[i[0]].transform(X[i[1]])\n\n",
"This error comes when transform function getting any new value for which LabelEncoder try to encode and because in training samples, when you are using fit_transform, that specific value did not present in the corpus. So there is a hack, whether use all the unique values with fit_transform function if you are sure that no new value will come further, or try some different encoding method which suits on the problem statement like HashingEncoder.\nHere is the example if no further new values will come in testing\nle_id.fit_transform(list(set(df['ID'].unique()).union(set(new_df['ID'].unique())))) \nnew_df['ID'] = le_id.transform(new_df.ID)\n\n",
"This is in fact a known bug on LabelEncoder : BUG for fit_transform ... basically you have to fit it and then transform. It will work fine ! A suggestion is to keep a dictionary of your encoders to each and every column so that in the inverse transform you are able to retrieve the original categorical values.\n"
] |
[
8,
4,
2,
0,
0,
0,
0
] |
[
"I hope this helps someone as it's more recent.\nsklearn uses the fit_transform to perform the fit function and transform function directing on label encoding.\nTo solve the problem for Y label throwing error for unseen values, use:\nfrom sklearn.preprocessing import LabelEncoder\nle = LabelEncoder() \nle.fit_transform(Col) \n\nThis solves it!\n",
"I used \n le.fit_transform(Col) \n\nand I was able to resolve the issue. It does fit and transform both. we dont need to worry about unknown values in the test split\n"
] |
[
-1,
-4
] |
[
"categorical_data",
"encoding",
"machine_learning",
"python",
"scikit_learn"
] |
stackoverflow_0046288517_categorical_data_encoding_machine_learning_python_scikit_learn.txt
|
Q:
cannot find element after being redirected to a webpage - python selenium
This is not the exact code, but the bug is basically the same. I use Python Selenium to go to a website. There are two buttons. The first one redirects me to another page. The second button is on the page that it has redirected me to. For some reason, it says that the button on the second page cannot be found.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = webdriver.ChromeOptions()
options.add_experimental_option('excludeSwitches', ['enable-logging'])
driver = webdriver.Chrome(options=options, executable_path=r"C:\Users\angel\Downloads\chromedriver.exe")
#techwithtim cause why not
driver.get('https://www.techwithtim.net')
driver.implicitly_wait(3)
#first button
buttonPath = r"/html/body/div[2]/div/div[2]/aside[2]/div/ul/li[2]/a"
try:
button = driver.find_element(By.XPATH, buttonPath)
button.click()
except:
print("bad")
#second button on newly redirected webpage
secondPath = r"/html/body/nav/div/div/ul/li[1]/a"
secondButton = driver.find_element(By.CLASS_NAME, secondPath)
secondButton.click()
I redid my code into the smallest form above and still, it doesn't work for me. I made a try except block on the second button and it printed the page source which prints the html for the first webpage, not the redirected one. How can I fix this?
Sorry if this is an easy question since I am still very new to programming, any help is appreciated.
A:
Try this:
# Needed libs
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
#Define web driver as a Chrome driver and navigate. I am in Linux, in Windows you can define your browser in the same way you were doing it.
driver = webdriver.Chrome()
driver.maximize_window()
url = 'https://www.techwithtim.net'
driver.get(url)
# First button
buttonPath = "//li[@id='menu-item-402']/a"
button = WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, buttonPath)))
button.click()
# Second button on newly redirected webpage
secondPath = "//li[@id='menu-item-156']/a"
secondButton = WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, secondPath)))
secondButton.click()
Two pieces of advice:
Try to locate your elements with simple locators; if you look at my XPath, it is really simple.
When you have to interact with an element, always wait until it is ready.
|
cannot find element after being redirected to a webpage - python selenium
|
This is not the exact code, but the bug is basically the same. I use Python Selenium to go to a website. There are two buttons. The first one redirects me to another page. The second button is on the page that it has redirected me to. For some reason, it says that the button on the second page cannot be found.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = webdriver.ChromeOptions()
options.add_experimental_option('excludeSwitches', ['enable-logging'])
driver = webdriver.Chrome(options=options, executable_path=r"C:\Users\angel\Downloads\chromedriver.exe")
#techwithtim cause why not
driver.get('https://www.techwithtim.net')
driver.implicitly_wait(3)
#first button
buttonPath = r"/html/body/div[2]/div/div[2]/aside[2]/div/ul/li[2]/a"
try:
button = driver.find_element(By.XPATH, buttonPath)
button.click()
except:
print("bad")
#second button on newly redirected webpage
secondPath = r"/html/body/nav/div/div/ul/li[1]/a"
secondButton = driver.find_element(By.CLASS_NAME, secondPath)
secondButton.click()
I redid my code into the smallest form above and still, it doesn't work for me. I made a try except block on the second button and it printed the page source which prints the html for the first webpage, not the redirected one. How can I fix this?
Sorry if this is an easy question since I am still very new to programming, any help is appreciated.
|
[
"Try this:\n# Needed libs\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\n#Define web driver as a Chrome driver and navigate. I am in Linux, in Windows you can define your browser in the same way you were doing it.\ndriver = webdriver.Chrome()\ndriver.maximize_window()\n\nurl = 'https://www.techwithtim.net'\ndriver.get(url)\n\n# First button\nbuttonPath = \"//li[@id='menu-item-402']/a\"\nbutton = WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, buttonPath)))\nbutton.click()\n\n\n# Second button on newly redirected webpage\nsecondPath = \"//li[@id='menu-item-156']/a\"\nsecondButton = WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, secondPath)))\nsecondButton.click()\n\n2 Advices:\n\nTry to locate your elements with simple locators, if you see my xpath it is really simple.\nAlways, when you have to interact with an element, try to wait for it till it is ready.\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"selenium",
"selenium_chromedriver",
"selenium_webdriver"
] |
stackoverflow_0074581734_python_selenium_selenium_chromedriver_selenium_webdriver.txt
|
Q:
Adding test's docstring to the html report of a parametrized test as Description (pytest, Python)
I am running a parametrized test and I want to use a kind of parametrized docstring in the html report. Normally, without the parametrization, it is the docstring of each test that I see as the description for the particular test. Now, with the parametrization, it is, of course, always the same text. Can I add a name, or some unique text from a file, for each test?
For now, I have this easy setup:
def load_json_file(filename) -> list:
""" Load the data from the given json file, return a list. """
with open(filename, 'r') as openfile:
json_object = json.load(openfile)
return list(json_object.items())
# data source from a file
def data_from_browser():
return load_json_file('given_data.json')
# data source will be later from a browser
def desired_data():
return load_json_file('desired_data.json')
# for the purpose of the html report
def list_of_ids():
# could be, probably, loaded from a file
return ["set1", "set2", "set3"]
@pytest.mark.parametrize("given, expected", list(zip(desired_data(), data_from_browser())), ids=list_of_ids())
def test_timedistance_v0(given, expected):
""" General docstring for the parametrized test. """
assert given[0] == expected[0] # title
dict_diff = DeepDiff(given[1], expected[1]) # value
assert len(dict_diff) == 0, 'The given value is not equal to the desired value.'
The input data are like this (same for both now):
{"DISTINCT IP": "1,722",
"TYPES": {"Error": "1", "Name": "14570"},
"FROM": [["AV", "7,738", "20.93%"], ["AA", "4,191", "11.34%"], ["AB", "4,160", "11.25%"]]}
I am generating the report as
> pytest -v -s test_my_param.py --html="reports/report_param.html"
which looks like this (you can see the IDs and the docstring)
example of the html report
Can I somehow add maybe the IDs into the docstring (Description) section?
Thanks for any hints
Michala
A:
If you're following the guide to modify the results table from the pytest-html plugin documentation, you can use the item object to add the ID into the cell with item.callspec.id:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
outcome = yield
report = outcome.get_result()
report.description = str(item.function.__doc__ + item.callspec.id)
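One caveat worth noting: non-parametrized tests carry no callspec attribute, so accessing item.callspec.id unconditionally raises AttributeError for them. Here is a hedged variant for a conftest.py that guards this case; the build_description helper is my own naming for illustration, not part of pytest or pytest-html.

```python
import pytest


def build_description(docstring, param_id=None):
    """Combine a test's docstring with its parametrize ID, if any."""
    doc = (docstring or "").strip()
    return f"{doc} [{param_id}]" if param_id else doc


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # parametrized tests carry a callspec; plain tests do not
    param_id = item.callspec.id if hasattr(item, "callspec") else None
    report.description = build_description(item.function.__doc__, param_id)
```

With this guard in place, the same hook serves both parametrized and plain tests in one report.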
|
Adding test's docstring to the html report of a parametrized test as Description (pytest, Python)
|
I am running a parametrized test and I want to use a kind of parametrized docstring in the html report. Normally, without the parametrization, it is the docstring of each test that I see as the description for the particular test. Now, with the parametrization, it is, of course, always the same text. Can I add a name, or some unique text from a file, for each test?
For now, I have this easy setup:
def load_json_file(filename) -> list:
""" Load the data from the given json file, return a list. """
with open(filename, 'r') as openfile:
json_object = json.load(openfile)
return list(json_object.items())
# data source from a file
def data_from_browser():
return load_json_file('given_data.json')
# data source will be later from a browser
def desired_data():
return load_json_file('desired_data.json')
# for the purpose of the html report
def list_of_ids():
# could be, probably, loaded from a file
return ["set1", "set2", "set3"]
@pytest.mark.parametrize("given, expected", list(zip(desired_data(), data_from_browser())), ids=list_of_ids())
def test_timedistance_v0(given, expected):
""" General docstring for the parametrized test. """
assert given[0] == expected[0] # title
dict_diff = DeepDiff(given[1], expected[1]) # value
assert len(dict_diff) == 0, 'The given value is not equal to the desired value.'
The input data are like this (same for both now):
{"DISTINCT IP": "1,722",
"TYPES": {"Error": "1", "Name": "14570"},
"FROM": [["AV", "7,738", "20.93%"], ["AA", "4,191", "11.34%"], ["AB", "4,160", "11.25%"]]}
I am generating the report as
> pytest -v -s test_my_param.py --html="reports/report_param.html"
which looks like this (you can see the IDs and the docstring)
example of the html report
Can I somehow add maybe the IDs into the docstring (Description) section?
Thanks for any hints
Michala
|
[
"If you're following the guide to modify the results table from the pytest-html plugin documentation, you can use the item object to add the ID into the cell with item.callspec.id:\n@pytest.hookimpl(hookwrapper=True)\ndef pytest_runtest_makereport(item, call):\n outcome = yield\n report = outcome.get_result()\n report.description = str(item.function.__doc__ + item.callspec.id)\n\n\n\n"
] |
[
0
] |
[] |
[] |
[
"parametrized_testing",
"pytest",
"python"
] |
stackoverflow_0074547592_parametrized_testing_pytest_python.txt
|
Q:
Adding Key-Value pair to a Series that does not have a given Key (Pandas)
I want to update a series if it is missing a key, but my code is generating an error.
This is my code:
for item in list:
if item not in my_series.keys():
my_series = my_series[item] = 0
Where my_series is a series of dtype int64. It's actually a value count.
My code above is generating the following error
'int' object does not support item assignment
A:
What do you mean by "series"? There's no such data type in python if I'm not mistaken. You seem to use it as it was a dict. Do you need to set default value to 0 for a key "item"?
If so:
for item in <definitely_list_is_a_bad_name>:
my_series[item] = my_series.get(item) if my_series.get(item, None) is not None else 0
A:
From what I read in the docs, a Pandas series functions much like a dict, so my comment remains valid:
import pandas as pd
d = {'a': 1, 'b': 2, 'c': 3}
my_series = pd.Series(data=d, index=['a', 'b', 'c'])
my_list = ['a','h','c','d']
for item in my_list:
if item not in my_series:
my_series[item] = 0
print(my_series)
# a 1
# b 2
# c 3
# h 0
# d 0
# dtype: int64
Btw, as John Doe mentioned, "list" is a bad choice of a name; don't shadow Python built-in names like list with your own objects, else you will lose access to those built-ins and risk problems later.
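For this particular task, the loop can also be replaced with a single reindex call, sketched here on the same sample data: missing keys are added and filled with 0 in one step.

```python
import pandas as pd

my_series = pd.Series({'a': 1, 'b': 2, 'c': 3})
my_list = ['a', 'h', 'c', 'd']

# union of the existing index and the wanted keys; new keys get fill_value
my_series = my_series.reindex(my_series.index.union(my_list), fill_value=0)
```

Existing values are preserved, and because fill_value is an int the series stays int64 instead of picking up NaNs.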
|
Adding Key-Value pair to a Series that does not have a given Key (Pandas)
|
I want to update a series if it is missing a key, but my code is generating an error.
This is my code:
for item in list:
if item not in my_series.keys():
my_series = my_series[item] = 0
Where my_series is a series of dtype int64. It's actually a value count.
My code above is generating the following error
'int' object does not support item assignment
|
[
"What do you mean by \"series\"? There's no such data type in python if I'm not mistaken. You seem to use it as it was a dict. Do you need to set default value to 0 for a key \"item\"?\nIf so:\nfor item in <definitely_list_is_a_bad_name>:\n my_series[item] = my_series.get(item) if my_series.get(item, None) is not None else 0 \n\n",
"From what I read in the docs, a Pandas series functions much like a dict, so my comment remains valid:\nimport pandas as pd\nd = {'a': 1, 'b': 2, 'c': 3}\nmy_series = pd.Series(data=d, index=['a', 'b', 'c'])\nmy_list = ['a','h','c','d']\n\nfor item in my_list:\n if item not in my_series:\n my_series[item] = 0\n \nprint(my_series)\n\n# a 1\n# b 2\n# c 3\n# h 0\n# d 0\n# dtype: int64\n\nBtw, as John Doe mentioned, \"list\" is a bad choice of a name; don't use Python keywords as objects names, else you will overwrite those keywords and risk problems later.\n"
] |
[
0,
0
] |
[] |
[] |
[
"list",
"pandas",
"python",
"series"
] |
stackoverflow_0074577836_list_pandas_python_series.txt
|
Q:
Splitting an image in half, leaving one half transparent, keeping the same image dimensions
I have an image, I want to split it vertically. When I do this I want to maintain the same aspect ratio (1024x1024), but make the other half of each image transparent. (Imagine going into photoshop, and just deleting half of an image leaving the transparent mask.)
I used image slicer to easily slice in half vertically. Then PIL to paste a new image. I get the ValueError: images do not match, and so I was wondering if there is an easier way.
from image_slicer import slice
from PIL import Image
slice('stickfigure.png', 2)
img = Image.open("stickfigure_01_01.png")
img.show()
img2 = Image.open("stickfigure_01_02.png")
img2.show()
background = Image.open("emptycanvas.png")
foreground = Image.open("stickfigure_01_01.png")
final = Image.new("RGBA", background.size)
final = Image.alpha_composite(final, background)
final = Image.alpha_composite(final, foreground)
Emptycanvas is just a 1024x1024 blank transparent png.
A:
You don't really actually want to split the image in half, since you want to retain the original dimensions. So you actually just want to make one half transparent - remember the alpha/transparency is just a layer in your image, so all you need is a new, alpha layer that is white where you want to see the original image and black where you don't.
So, let's make a radial gradient, get its size, and make a new black alpha channel the same size. Then draw a white rectangle on the left half and push that into your image as the alpha/transparency channel:
#!/usr/bin/env python3
from PIL import Image, ImageDraw
# Create radial gradient, and get its dimensions
im = Image.radial_gradient('L')
w, h = im.size
im.save('DEBUG-initial.png')
# Create single channel alpha/transparency layer, same size, initially all black
alpha = Image.new('L', (w,h))
draw = ImageDraw.Draw(alpha)
# Fill left half with white
draw.rectangle((0,0,int(w/2),h), fill='white')
alpha.save('DEBUG-alpha.png')
# Push that alpha layer into gradient image
im.putalpha(alpha)
im.save('result.png')
Note that as you didn't supply any representative image, I don't know whether your image already had some transparency, and if it did, this simple method will replace it throughout the image. If that is the case, you should extract the existing transparency from the image and draw onto that, rather than assuming you can replace the transparency wholesale like I did. You can use either of the following approaches:
R,G,B,A = im.split()
#
# modify A here
#
result = Image.merge('RGBA', (R,G,B,A))
or
alpha = im.getchannel('A')
#
# modify alpha here
#
im.putalpha(A)
Although you didn't mention it in your question, it seems you not only want to make one half transparent, but also want to move the remaining visible part to the other side!
You need to add this near the start to copy the right half:
# Copy right half
rhs = im.crop((int(w/2),0,w,h))
and this near the end to paste the copied right half into the left half:
im.paste(rhs, (0,0))
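Putting the pieces together, here is a self-contained sketch for the question's 1024x1024 case. A solid red RGBA image stands in for stickfigure.png, and the existing alpha channel is preserved rather than replaced, per the note above.

```python
from PIL import Image, ImageDraw

im = Image.new('RGBA', (1024, 1024), (255, 0, 0, 255))  # stand-in for stickfigure.png
w, h = im.size

# start from the image's current alpha so any prior transparency survives
alpha = im.getchannel('A')
draw = ImageDraw.Draw(alpha)
draw.rectangle((w // 2, 0, w, h), fill=0)  # make the right half fully transparent

im.putalpha(alpha)
im.save('stickfigure_half.png')
```

The result keeps the original 1024x1024 dimensions; only the right half becomes transparent.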
|
Splitting an image in half, leaving one half transparent, keeping the same image dimensions
|
I have an image, I want to split it vertically. When I do this I want to maintain the same aspect ratio (1024x1024), but make the other half of each image transparent. (Imagine going into photoshop, and just deleting half of an image leaving the transparent mask.)
I used image slicer to easily slice in half vertically. Then PIL to paste a new image. I get the ValueError: images do not match, and so I was wondering if there is an easier way.
from image_slicer import slice
from PIL import Image
slice('stickfigure.png', 2)
img = Image.open("stickfigure_01_01.png")
img.show()
img2 = Image.open("stickfigure_01_02.png")
img2.show()
background = Image.open("emptycanvas.png")
foreground = Image.open("stickfigure_01_01.png")
final = Image.new("RGBA", background.size)
final = Image.alpha_composite(final, background)
final = Image.alpha_composite(final, foreground)
Emptycanvas is just a 1024x1024 blank transparent png.
|
[
"You don't really actually want to split the image in half, since you want to retain the original dimensions. So you actually just want to make one half transparent - remember the alpha/transparency is just a layer in your image, so all you need is a new, alpha layer that is white where you want to see the original image and black where you don't.\nSo, let's make a radial gradient, get its size, and make a new black alpha channel the same size. Then draw a white rectangle on the left half and push that into your image as the alpha/transparency channel:\n#!/usr/bin/env python3\n\nfrom PIL import Image, ImageDraw\n\n# Create radial gradient, and get its dimensions\nim = Image.radial_gradient('L')\nw, h = im.size\nim.save('DEBUG-initial.png')\n\n\n# Create single channel alpha/transparency layer, same size, initially all black\nalpha = Image.new('L', (w,h))\ndraw = ImageDraw.Draw(alpha)\n\n# Fill left half with white\ndraw.rectangle((0,0,int(w/2),h), fill='white')\nalpha.save('DEBUG-alpha.png')\n\n\n# Push that alpha layer into gradient image\nim.putalpha(alpha)\nim.save('result.png')\n\n\n\nNote that as you didn't supply any representative image, I don't know whether your image already had some transparency, and if it did, this simple method will replace it throughout the image. If that is the case, you should extract the existing transparency from the image and draw onto that, rather than assuming you can replace the transparency wholesale like I did. 
You can use either of the following approaches:\nR,G,B,A = im.split() \n#\n# modify A here\n#\nresult = Image.merge('RGBA', (R,G,B,A))\n\nor\nalpha = im.getchannel('A')\n#\n# modify alpha here\n# \nim.putalpha(A)\n\n\nAlthough you didn't mention it in your question, it seems you not only want to make one half transparent, but also want to move the remaining visible part to the other side!\nYou need to add this near the start to copy the right half:\n# Copy right half\nrhs = im.crop((int(w/2),0,w,h))\n\nand this near the end to paste the copied right half into the left half:\nim.paste(rhs, (0,0))\n\n"
] |
[
1
] |
[] |
[] |
[
"image",
"python",
"python_imaging_library"
] |
stackoverflow_0074578586_image_python_python_imaging_library.txt
|
Q:
Setting SQLAlchemy autoincrement start value
The autoincrement argument in SQLAlchemy seems to be only True and False, but I want to set a predefined starting value, aid = 1001, so that via autoincrement aid = 1002 is assigned when the next insert is done.
In SQL, can be changed like:
ALTER TABLE article AUTO_INCREMENT = 1001;
I'm using MySQL and I have tried following, but it doesn't work:
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class Article(Base):
__tablename__ = 'article'
aid = Column(INTEGER(unsigned=True, zerofill=True),
autoincrement=1001, primary_key=True)
So, how can I get that? Thanks in advance!
A:
You can achieve this by using DDLEvents. This will allow you to run additional SQL statements just after the CREATE TABLE ran. Look at the examples in the link, but I am guessing your code will look similar to below:
from sqlalchemy import event
from sqlalchemy import DDL
event.listen(
Article.__table__,
"after_create",
DDL("ALTER TABLE %(table)s AUTO_INCREMENT = 1001;")
)
A:
According to the docs:
autoincrement –
This flag may be set to False to indicate an integer primary key column that should not be considered to be the “autoincrement” column, that is the integer primary key column which generates values implicitly upon INSERT and whose value is usually returned via the DBAPI cursor.lastrowid attribute. It defaults to True to satisfy the common use case of a table with a single integer primary key column.
So, autoincrement is only a flag to let SQLAlchemy know whether it's the primary key you want to increment.
What you're trying to do is to create a custom autoincrement sequence.
So, your example, I think, should look something like:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.schema import Sequence
Base = declarative_base()
class Article(Base):
__tablename__ = 'article'
aid = Column(INTEGER(unsigned=True, zerofill=True),
Sequence('article_aid_seq', start=1001, increment=1),
primary_key=True)
Note, I don't know whether you're using PostgreSQL or not, so you should make note of the following if you are:
The Sequence object also implements special functionality to accommodate Postgresql’s SERIAL datatype. The SERIAL type in PG automatically generates a sequence that is used implicitly during inserts. This means that if a Table object defines a Sequence on its primary key column so that it works with Oracle and Firebird, the Sequence would get in the way of the “implicit” sequence that PG would normally use. For this use case, add the flag optional=True to the Sequence object - this indicates that the Sequence should only be used if the database provides no other option for generating primary key identifiers.
A:
I couldn't get the other answers to work using mysql and flask-migrate so I did the following inside a migration file.
from app import db
db.engine.execute("ALTER TABLE myDB.myTable AUTO_INCREMENT = 2000;")
Be warned that if you regenerated your migration files this will get overwritten.
A:
I know this is an old question, but I recently had to figure this out and none of the available answers was quite what I needed. The solution I found relies on Sequence in SQLAlchemy. For whatever reason, I could not get it to work when I called the Sequence constructor within the Column constructor, as referenced above. As a note, I am using PostgreSQL.
For your answer I would have put it as such:
import os
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Sequence, Integer, create_engine
Base = declarative_base()
def connection():
engine = create_engine(f"postgresql://postgres:{os.getenv('PGPASSWORD')}@localhost:{os.getenv('PGPORT')}/test")
return engine
engine = connection()
class Article(Base):
__tablename__ = 'article'
seq = Sequence('article_aid_seq', start=1001)
aid = Column('aid', Integer, seq, server_default=seq.next_value(), primary_key=True)
Base.metadata.create_all(engine)
This then can be called in PostgreSQL with:
insert into article (aid) values (DEFAULT);
select * from article;
aid
------
1001
(1 row)
Hope this helps someone as it took me a while
A:
You can do it using the mysql_auto_increment table create option. There are mysql_engine and mysql_default_charset options too, which might also be handy:
article = Table(
'article', metadata,
Column('aid', INTEGER(unsigned=True, zerofill=True), primary_key=True),
mysql_engine='InnoDB',
mysql_default_charset='utf8',
mysql_auto_increment='1001',
)
The above will generate:
CREATE TABLE article (
aid INTEGER UNSIGNED ZEROFILL NOT NULL AUTO_INCREMENT,
PRIMARY KEY (aid)
)ENGINE=InnoDB AUTO_INCREMENT=1001 DEFAULT CHARSET=utf8
A:
If your database supports Identity columns*, the starting value can be set like this:
import sqlalchemy as sa
tbl = sa.Table(
't10494033',
sa.MetaData(),
sa.Column('id', sa.Integer, sa.Identity(start=200, always=True), primary_key=True),
)
Resulting in this DDL output:
CREATE TABLE t10494033 (
id INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 200),
PRIMARY KEY (id)
)
Identity(..) is ignored if the backend does not support it.
* PostgreSQL 10+, Oracle 12+ and MSSQL, according to the linked documentation above.
|
Setting SQLAlchemy autoincrement start value
|
The autoincrement argument in SQLAlchemy seems to accept only True and False, but I want to set a pre-defined starting value, aid = 1001, so that via autoincrement aid = 1002 when the next insert is done.
In SQL, this can be changed like:
ALTER TABLE article AUTO_INCREMENT = 1001;
I'm using MySQL and I have tried following, but it doesn't work:
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class Article(Base):
__tablename__ = 'article'
aid = Column(INTEGER(unsigned=True, zerofill=True),
autoincrement=1001, primary_key=True)
So, how can I get that? Thanks in advance!
|
[
"You can achieve this by using DDLEvents. This will allow you to run additional SQL statements just after the CREATE TABLE ran. Look at the examples in the link, but I am guessing your code will look similar to below:\nfrom sqlalchemy import event\nfrom sqlalchemy import DDL\nevent.listen(\n Article.__table__,\n \"after_create\",\n DDL(\"ALTER TABLE %(table)s AUTO_INCREMENT = 1001;\")\n)\n\n",
"According to the docs:\n\nautoincrement –\n This flag may be set to False to indicate an integer primary key column that should not be considered to be the “autoincrement” column, that is the integer primary key column which generates values implicitly upon INSERT and whose value is usually returned via the DBAPI cursor.lastrowid attribute. It defaults to True to satisfy the common use case of a table with a single integer primary key column. \n\nSo, autoincrement is only a flag to let SQLAlchemy know whether it's the primary key you want to increment.\nWhat you're trying to do is to create a custom autoincrement sequence.\nSo, your example, I think, should look something like:\nfrom sqlalchemy.ext.declarative import declarative_base\nfrom sqlalchemy.schema import Sequence\n\nBase = declarative_base()\n\nclass Article(Base):\n __tablename__ = 'article'\n aid = Column(INTEGER(unsigned=True, zerofill=True), \n Sequence('article_aid_seq', start=1001, increment=1), \n primary_key=True)\n\nNote, I don't know whether you're using PostgreSQL or not, so you should make note of the following if you are:\n\nThe Sequence object also implements special functionality to accommodate Postgresql’s SERIAL datatype. The SERIAL type in PG automatically generates a sequence that is used implicitly during inserts. This means that if a Table object defines a Sequence on its primary key column so that it works with Oracle and Firebird, the Sequence would get in the way of the “implicit” sequence that PG would normally use. For this use case, add the flag optional=True to the Sequence object - this indicates that the Sequence should only be used if the database provides no other option for generating primary key identifiers.\n\n",
"I couldn't get the other answers to work using mysql and flask-migrate so I did the following inside a migration file. \nfrom app import db\ndb.engine.execute(\"ALTER TABLE myDB.myTable AUTO_INCREMENT = 2000;\")\n\nBe warned that if you regenerated your migration files this will get overwritten. \n",
"I know this is an old question but I recently had to figure this out and none of the available answer were quite what I needed. The solution I found relied on Sequence in SQLAlchemy. For whatever reason, I could not get it to work when I called the Sequence constructor within the Column constructor as has been referenced above. As a note, I am using PostgreSQL.\nFor your answer I would have put it as such:\nfrom sqlalchemy.ext.declarative import declarative_base\nfrom sqlalchemy import Sequence, Column, Integer\n\nimport os\nfrom sqlalchemy.ext.declarative import declarative_base\nfrom sqlalchemy import Column, Sequence, Integer, create_engine\nBase = declarative_base()\n\ndef connection():\n engine = create_engine(f\"postgresql://postgres:{os.getenv('PGPASSWORD')}@localhost:{os.getenv('PGPORT')}/test\")\n return engine\n\nengine = connection()\n\nclass Article(Base):\n __tablename__ = 'article'\n seq = Sequence('article_aid_seq', start=1001)\n aid = Column('aid', Integer, seq, server_default=seq.next_value(), primary_key=True)\n\nBase.metadata.create_all(engine)\n\nThis then can be called in PostgreSQL with:\ninsert into article (aid) values (DEFAULT);\nselect * from article;\n\n aid \n------\n 1001\n(1 row)\n\nHope this helps someone as it took me a while\n",
"You can do it using the mysql_auto_increment table create option. There are mysql_engine and mysql_default_charset options too, which might be also handy:\narticle = Table(\n 'article', metadata,\n Column('aid', INTEGER(unsigned=True, zerofill=True), primary_key=True),\n mysql_engine='InnoDB',\n mysql_default_charset='utf8',\n mysql_auto_increment='1001',\n)\n\nThe above will generate:\nCREATE TABLE article (\n aid INTEGER UNSIGNED ZEROFILL NOT NULL AUTO_INCREMENT, \n PRIMARY KEY (aid)\n)ENGINE=InnoDB AUTO_INCREMENT=1001 DEFAULT CHARSET=utf8\n\n",
"If your database supports Identity columns*, the starting value can be set like this:\nimport sqlalchemy as sa\n\ntbl = sa.Table(\n 't10494033',\n sa.MetaData(),\n sa.Column('id', sa.Integer, sa.Identity(start=200, always=True), primary_key=True),\n)\n\nResulting in this DDL output:\nCREATE TABLE t10494033 (\n id INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 200), \n PRIMARY KEY (id)\n)\n\nIdentity(..) is ignored if the backend does not support it.\n\n* PostgreSQL 10+, Oracle 12+ and MSSQL, according to the linked documentation above.\n"
] |
[
24,
20,
3,
3,
1,
0
] |
[] |
[] |
[
"auto_increment",
"python",
"sqlalchemy"
] |
stackoverflow_0010494033_auto_increment_python_sqlalchemy.txt
|
Q:
How is the text from this pdf encoded?
I have some PDFs with data about machine parts and I am trying to extract sizes. I extracted the text from a PDF via pypdfium2.
import pypdfium2 as pdfium
pdf = pdfium.PdfDocument("myfile.pdf")
page=pdf[1]
textpage = page.get_textpage()
Most of the text is readable but for some reason the important data is not readable when extracted.
In the extracted string the relevant part is like this
Readable text \r\n\x13\x0c\x10 \x18\x0c\x18 \x0b\x10\x0e\x10\x15\x18\x0f\x10 \x15\x0c\x10 \x14\x0c\x10 \x14\x0c\x15 readable text
I also tried tika and PyMuPDF. They only give me the question mark character for those parts.
I know the mangled part (\r\n\x13\x0c\x10 \x18\x0c\x18 \x0b\x10\x0e\x10\x15\x18\x0f\x10 \x15\x0c\x10 \x14\x0c\x10 \x14\x0c\x15) should be 3,0 8,8 +0,058/0 5,0 4,0 4,5.
My current idea is to make my own encoding table, but I wanted to ask if there is a better method and if this looks familiar to someone.
I have about 52 files with around 200 occurrences each.
While the PDFs are not confidential, I don't want to post links because it is not my intellectual property.
Update------------------------------
I tried to find out more about the fonts.
from pdfreader import PDFDocument
fd = open("myfile", "rb")
doc = PDFDocument(fd)
page = next(doc.pages())
font_keys=sorted(page.Resources.Font.keys())
for font_key in font_keys:
font = page.Resources.Font[font_key]
print(f"{font_key}: {font.Subtype}, {font.BaseFont}, {font.Encoding}")
gives:
R13: Type0, UHIIUQ+MetaPlusBold-Roman-Identity-H, Identity-H
R17: Type0, EWGLNL+MetaPlusBold-Caps-Identity-H, Identity-H
R20: Type1, NRVKIY+Meta-LightLF, {'Type': 'Encoding', 'BaseEncoding': 'WinAnsiEncoding', 'Differences': [33, 'agrave', 'degree', 39, 'quoteright', 177, 'endash']}
R24: Type0, IKRCND+MetaPlusBold-Italic-Identity-H, Identity-H
-Edit------
I am not interested in help translating it manually; I can do that by myself. I am interested in a solution that works by script, for example a script that extracts fonts with code maps from the PDF and then uses those to translate the unreadable parts.
A:
This is not uncommon: it is CID CMAP substitution as output in Python notation, and it is usually specific to a single font with a 6-character random ID prefix, e.g. UHIIUQ+FontName,
often found for subset fonts that carry a limited range of characters.
should be 3,0 8,8 +0,058/0 5,0 4,0 4,5
\r\n = CR LF (the Windows line ending, \x0d\x0a)
\x13 has been mapped to 3
\x0c has been mapped to ,
\x10 has been mapped to 0
(literal nbsp)
\x18 = 8
\x0c = ,
\x18 = 8
(literal nbsp)
\x0b has been mapped to +
\x10 = 0
\x0e has been mapped to , (very odd see \x0c)
\x10 = 0
\x15 = 5
\x18 = 8
\x0f has been mapped to /
\x10 = 0
(literal nbsp)
\x15 etc......................
\x0c
\x10
\x14
\x0c
\x10
\x14
\x0c
\x15
so \x0# are low order control codes & punctuation
and \x1# are digits
unknown if \x2# are used for letters, the CMAP table should be queried for the full details
\x0e has been mapped to , (very odd see \x0c)
I suspect, since it differs from \x0c, that it should possibly be the decimal separator dot?
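The mapping sketched above can be turned into a small decoding script. A minimal sketch: the table below is an assumption reverse-engineered from the sample string in the question (it is not read from the PDF's CMAP, so it only covers these code points):

```python
# Inferred code points: \x1N maps to digit N, plus a few punctuation marks.
# This table is an assumption derived from the question's sample, not the font's CMAP.
digit_map = {code: str(code - 0x10) for code in range(0x10, 0x1A)}
punct_map = {0x0B: '+', 0x0C: ',', 0x0E: ',', 0x0F: '/'}
table = {**digit_map, **punct_map}

raw = ('\r\n\x13\x0c\x10 \x18\x0c\x18 \x0b\x10\x0e\x10\x15\x18\x0f\x10 '
       '\x15\x0c\x10 \x14\x0c\x10 \x14\x0c\x15')
decoded = raw.translate(table).strip()
print(decoded)  # 3,0 8,8 +0,058/0 5,0 4,0 4,5
```

For a robust solution across all 52 files, the table should be built from each font's CMAP rather than hard-coded.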
A:
Here is example code to get the source of a font's CMAP with PyMuPDF:
import fitz
doc = fitz.open("some.pdf")
# assume that we know a font's xref already
# extract the xref of its CMAP:
cmap_xref = doc.xref_get_key(xref, "ToUnicode")[1] # second string is 'nnn 0 R'
if cmap_xref.endswith("0 R"): # check if a CMAP exists at all
cxref = int(cmap_xref.split()[0])
else:
raise ValueError("no CMAP found")
print(doc.xref_stream(cxref).decode()) # convert bytes to string
/CIDInit /ProcSet findresource begin
12 dict begin
begincmap
/CMapType 2 def
/CMapName/R63 def
1 begincodespacerange
<00><ff>
endcodespacerange
12 beginbfrange
<20><20><0020>
<2e><2e><002e>
<30><31><0030>
<43><46><0043>
<49><49><0049>
<4c><4d><004c>
<4f><50><004f>
<61><61><0061>
<63><69><0063>
<6b><70><006b>
<72><76><0072>
<78><79><0078>
endbfrange
endcmap
CMapName currentdict /CMap defineresource pop
end end
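Once you have the CMAP source, the beginbfrange section can be expanded into a code-to-character table with a few lines of stdlib Python. A rough sketch — the regex handles only the simple `<lo><hi><dst>` form shown above, and the sample entry is hypothetical (chosen to match the digit mapping observed in the question), not taken from a real font:

```python
import re

# Hypothetical bfrange excerpt: maps codes 0x10..0x19 to '0'..'9' (an assumption)
cmap_src = """
1 beginbfrange
<10><19><0030>
endbfrange
"""

# Each bfrange line is <srcLo><srcHi><dstLo>; expand it into a code -> char map.
mapping = {}
for lo, hi, dst in re.findall(r'<([0-9A-Fa-f]+)><([0-9A-Fa-f]+)><([0-9A-Fa-f]+)>', cmap_src):
    lo, hi, dst = int(lo, 16), int(hi, 16), int(dst, 16)
    for offset in range(hi - lo + 1):
        mapping[lo + offset] = chr(dst + offset)

decoded = '\x13\x10'.translate(mapping)
print(decoded)  # 30
```

A full parser would also need to handle `beginbfchar` entries and multi-byte code spaces, but for single-byte subset fonts like these this covers the common case.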
|
How is the text from this pdf encoded?
|
I have some PDFs with data about machine parts and I am trying to extract sizes. I extracted the text from a PDF via pypdfium2.
import pypdfium2 as pdfium
pdf = pdfium.PdfDocument("myfile.pdf")
page=pdf[1]
textpage = page.get_textpage()
Most of the text is readable but for some reason the important data is not readable when extracted.
In the extracted string the relevant part is like this
Readable text \r\n\x13\x0c\x10 \x18\x0c\x18 \x0b\x10\x0e\x10\x15\x18\x0f\x10 \x15\x0c\x10 \x14\x0c\x10 \x14\x0c\x15 readable text
I also tried tika and PyMuPDF. They only give me the question mark character for those parts.
I know the mangled part (\r\n\x13\x0c\x10 \x18\x0c\x18 \x0b\x10\x0e\x10\x15\x18\x0f\x10 \x15\x0c\x10 \x14\x0c\x10 \x14\x0c\x15) should be 3,0 8,8 +0,058/0 5,0 4,0 4,5.
My current idea is to make my own encoding table, but I wanted to ask if there is a better method and if this looks familiar to someone.
I have about 52 files with around 200 occurrences each.
While the PDFs are not confidential, I don't want to post links because it is not my intellectual property.
Update------------------------------
I tried to find out more about the fonts.
from pdfreader import PDFDocument
fd = open("myfile", "rb")
doc = PDFDocument(fd)
page = next(doc.pages())
font_keys=sorted(page.Resources.Font.keys())
for font_key in font_keys:
font = page.Resources.Font[font_key]
print(f"{font_key}: {font.Subtype}, {font.BaseFont}, {font.Encoding}")
gives:
R13: Type0, UHIIUQ+MetaPlusBold-Roman-Identity-H, Identity-H
R17: Type0, EWGLNL+MetaPlusBold-Caps-Identity-H, Identity-H
R20: Type1, NRVKIY+Meta-LightLF, {'Type': 'Encoding', 'BaseEncoding': 'WinAnsiEncoding', 'Differences': [33, 'agrave', 'degree', 39, 'quoteright', 177, 'endash']}
R24: Type0, IKRCND+MetaPlusBold-Italic-Identity-H, Identity-H
-Edit------
I am not interested in help translating it manually; I can do that by myself. I am interested in a solution that works by script, for example a script that extracts fonts with code maps from the PDF and then uses those to translate the unreadable parts.
|
[
"This is not uncommon CID CMAP substitution as output in python notation, and is usua;;y specific to a single font with 6 random ID e.g.UHIIUQ+Font name\noften found for subsetting fonts that have a limited range of characters.\nshould be 3,0 8,8 +0,058/0 5,0 4,0 4,5\n\\r\\n\\ = cR Nl (windows line feed \\x0d\\x0a)\n\\x13 has been mapped to 3\n\\x0c has been mapped to ,\n\\x10 has been mapped to 0\n (literal nbsp)\n\\x18 = 8\n\\x0c = ,\n\\x18 = 8\n (literal nbsp)\n\\x0b has been mapped to +\n\\x10 = 0\n\\x0e has been mapped to , (very odd see \\x0c)\n\\x10 = 0\n\\x15 = 5\n\\x18 = 8\n\\x0f has been mapped to /\n\\x10 = 0\n (literal nbsp)\n\\x15 etc......................\n\\x0c\n\\x10\n \n\\x14\n\\x0c\n\\x10\n \n\\x14\n\\x0c\n\\x15\n\nso \\x0# are low order control codes & punctuation\nand \\x1# are digits\nunknown if \\x2# are used for letters, the CMAP table should be queried for the full details\n\\x0e has been mapped to , (very odd see \\x0c)\nI suspect as its different that should possibly be decimal separator dot ?\n",
"Here is example code to get the source of a font's CMAP with PyMuPDF:\nimport fitz\ndoc = fitz.open(\"some.pdf\")\n# assume that we know a font's xref already\n# extract the xref of its CMAP:\ncmap_xref = doc.xref_get_key(xref, \"ToUnicode\")[1] # second string is 'nnn 0 R'\nif cmap_xref.endswith(\"0 R\"): # check if a CMAP exists at all\n cxref = int(cmap_xref.split()[0])\nelse:\n raise ValueError(\"no CMAP found\")\nprint(doc.xref_stream(cxref).decode()) # convert bytes to string\n/CIDInit /ProcSet findresource begin\n12 dict begin\nbegincmap\n/CMapType 2 def\n/CMapName/R63 def\n1 begincodespacerange\n<00><ff>\nendcodespacerange\n12 beginbfrange\n<20><20><0020>\n<2e><2e><002e>\n<30><31><0030>\n<43><46><0043>\n<49><49><0049>\n<4c><4d><004c>\n<4f><50><004f>\n<61><61><0061>\n<63><69><0063>\n<6b><70><006b>\n<72><76><0072>\n<78><79><0078>\nendbfrange\nendcmap\nCMapName currentdict /CMap defineresource pop\nend end\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"encoding",
"pdf_extraction",
"python"
] |
stackoverflow_0074534840_encoding_pdf_extraction_python.txt
|
Q:
Calculate Average Dynamically in Python
I have 3 variables A, B, C. I need to calculate the average of the values of A, B and C.
But sometimes I want to exclude a variable when it has no data.
for example,
if all variable have data, formula should be (A+B+C)/3.
if A didn't have data, formula should be like (B+C)/2.
Any suggestions?
I tried the avg() function, but that didn't work as expected.
A:
You can use the following, which basically excludes a value from the list if it's None:
import numpy as np
A=None
B=5
C=4
np.mean([num for num in[A,B,C] if num is not None])
>>> 4.5
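If you'd rather avoid the numpy dependency, the same None-filtering works with plain built-ins. A minimal sketch (the function name is mine):

```python
def mean_of_present(*values):
    # Keep only the variables that actually hold data (drop None)
    present = [v for v in values if v is not None]
    return sum(present) / len(present) if present else None

print(mean_of_present(None, 5, 4))  # 4.5
print(mean_of_present(1, 2, 3))    # 2.0
```

Returning None when every input is missing avoids a ZeroDivisionError; adjust that fallback to whatever your application expects.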
|
Calculate Average Dynamically in Python
|
I have 3 variables A, B, C. I need to calculate the average of the values of A, B and C.
But sometimes I want to exclude a variable when it has no data.
for example,
if all variable have data, formula should be (A+B+C)/3.
if A didn't have data, formula should be like (B+C)/2.
Any suggestions?
I tried the avg() function, but that didn't work as expected.
|
[
"you can use the following, which basically excludes the value from the list if its None\nimport numpy as np\nA=None\nB=5\nC=4\n\nnp.mean([num for num in[A,B,C] if num is not None])\n>>> 4.5\n\n"
] |
[
0
] |
[] |
[] |
[
"average",
"python",
"python_3.x"
] |
stackoverflow_0074581841_average_python_python_3.x.txt
|
Q:
How to extract multiple strings from list spaced apart
I have the following list:
lst = ['L38A', '38', 'L', 'A', '-6.7742', '-3.5671', '0.00226028', '0.4888', 'L38C', '38', 'L', 'C', '-7.7904', '-6.6306', '0.0', '0.4888', 'L38D', '38', 'L', 'D', '-6.3475', '-3.0068', '0.00398551', '0.4888', 'L38E', '38', 'L', 'E', '-6.4752', '-3.4645', '0.00250913', '0.4888']
I'm looking to extract the first element (position 0) and the 5th element (position 4) of each 8-value group, e.g. 'L38A' and '-6.7742':
Desired output
[('L38A','-6.7742'), ('L38C','-7.7904'), ('L38D','-6.3475')...('L38E','-6.4752')]
I have tried:
lst[::5]
A:
This works:
a=[(x,y) for x,y in zip(*[iter(lst[::4])]*2)]
print(a)
# [('L38A', '-6.7742'), ('L38C', '-7.7904'), ('L38D', '-6.3475'), ('L38E', '-6.4752')]
A:
We can handle this via a zip operation and list comprehension:
lst = ['L38A', '38', 'L', 'A', '-6.7742', '-3.5671', '0.00226028', '0.4888', 'L38C', '38', 'L', 'C', '-7.7904', '-6.6306', '0.0', '0.4888', 'L38D', '38', 'L', 'D', '-6.3475', '-3.0068', '0.00398551', '0.4888', 'L38E', '38', 'L', 'E', '-6.4752', '-3.4645', '0.00250913', '0.4888']
it = [iter(lst)] * 8
output = [(x[0], x[4]) for x in zip(*it)]
print(output)
This prints:
[('L38A', '-6.7742'), ('L38C', '-7.7904'), ('L38D', '-6.3475'), ('L38E', '-6.4752')]
The first zip generates a sequence of 8-element tuples, 8 being the number of values in each record. The comprehension then builds a list of 2-tuples containing the first and fifth elements of each record.
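An equivalent, arguably simpler approach is to slice the flat list with a stride of 8 (the record length) at the two offsets of interest and zip the slices together:

```python
lst = ['L38A', '38', 'L', 'A', '-6.7742', '-3.5671', '0.00226028', '0.4888',
       'L38C', '38', 'L', 'C', '-7.7904', '-6.6306', '0.0', '0.4888',
       'L38D', '38', 'L', 'D', '-6.3475', '-3.0068', '0.00398551', '0.4888',
       'L38E', '38', 'L', 'E', '-6.4752', '-3.4645', '0.00250913', '0.4888']

# lst[::8] takes element 0 of each 8-value record, lst[4::8] takes element 4
pairs = list(zip(lst[::8], lst[4::8]))
print(pairs)
# [('L38A', '-6.7742'), ('L38C', '-7.7904'), ('L38D', '-6.3475'), ('L38E', '-6.4752')]
```

This relies on the list length being an exact multiple of 8; if the last record can be truncated, the iterator-based approaches above degrade more gracefully.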
|
How to extract multiple strings from list spaced apart
|
I have the following list:
lst = ['L38A', '38', 'L', 'A', '-6.7742', '-3.5671', '0.00226028', '0.4888', 'L38C', '38', 'L', 'C', '-7.7904', '-6.6306', '0.0', '0.4888', 'L38D', '38', 'L', 'D', '-6.3475', '-3.0068', '0.00398551', '0.4888', 'L38E', '38', 'L', 'E', '-6.4752', '-3.4645', '0.00250913', '0.4888']
I'm looking to extract the first element (position 0) and the 5th element (position 4) of each 8-value group, e.g. 'L38A' and '-6.7742':
Desired output
[('L38A','-6.7742'), ('L38C','-7.7904'), ('L38D','-6.3475')...('L38E','-6.4752')]
I have tried:
lst[::5]
|
[
"This works:\na=[(x,y) for x,y in zip(*[iter(lst[::4])]*2)]\n\nprint(a)\n\n# [('L38A', '-6.7742'), ('L38C', '-7.7904'), ('L38D', '-6.3475'), ('L38E', '-6.4752')]\n\n",
"We can handle this via a zip operation and list comprehension:\nlst = ['L38A', '38', 'L', 'A', '-6.7742', '-3.5671', '0.00226028', '0.4888', 'L38C', '38', 'L', 'C', '-7.7904', '-6.6306', '0.0', '0.4888', 'L38D', '38', 'L', 'D', '-6.3475', '-3.0068', '0.00398551', '0.4888', 'L38E', '38', 'L', 'E', '-6.4752', '-3.4645', '0.00250913', '0.4888']\nit = [iter(lst)] * 8\noutput = [(x[0], x[4]) for x in zip(*it)]\nprint(output)\n\nThis prints:\n[('L38A', '-6.7742'), ('L38C', '-7.7904'), ('L38D', '-6.3475'), ('L38E', '-6.4752')]\n\nThe first zip generates a list of 8 tuples, with 8 being the number of values in each cycle. The comprehension then generates a list of 2-tuples containing the first and fifth elements.\n"
] |
[
2,
2
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0074581912_list_python.txt
|
Q:
Running poetry fails with /usr/bin/env: ‘python’: No such file or directory
I just installed poetry with the following install script
curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python3
However, when I execute poetry it fails with the following error
$ poetry
/usr/bin/env: ‘python’: No such file or directory
I recently upgraded to ubuntu 20.04, is this an issue with the upgrade or with poetry?
A:
poetry depends on whatever python resolves to and doesn't attempt to use a specific version of Python unless otherwise specified.
This issue will exist on Ubuntu systems from 20.04 onwards, as Python 2.7 is deprecated and the python command no longer maps to python3.x.
You'll find that aliasing python to python3 won't work (unless, perhaps, you specify it in your bashrc instead of any other shell run-command file), as poetry spins up its own shell to execute commands.
Install the following package instead
sudo apt install python-is-python3
It should be noted that you can install python2.7 if you want to and poetry should run fine.
A:
Also an issue on some other Ubuntu versions/variants (Mint 19.3 here).
The python-is-python3 answer from arshbot is a good option; alternatively, I found that just tweaking the script that invokes poetry fixed it for me. This is a more delicate approach, but also more fragile in case the script gets updated (and so overwritten) in future. Anyway, here's that lightweight/fragile option:
Edit the script,
vi ~/.poetry/bin/poetry
(other editors are available etc) and change the top line:
#!/usr/bin/env python
becomes
#!/usr/bin/env python3
sorted!
This is only likely to be needed as a temporary workaround considering finswimmer's comment, from which it seems poetry will be more intelligent about using python3 in future in this situation.
A:
FOR MAC USERS
Run this:
ls -l /usr/local/bin/python*
You should get something like this:
lrwxr-xr-x 1 irfan admin 34 Nov 11 16:32 /usr/local/bin/python3 -> ../Cellar/python/3.7.5/bin/python3
lrwxr-xr-x 1 irfan admin 41 Nov 11 16:32 /usr/local/bin/python3-config -> ../Cellar/python/3.7.5/bin/python3-config
lrwxr-xr-x 1 irfan admin 36 Nov 11 16:32 /usr/local/bin/python3.7 -> ../Cellar/python/3.7.5/bin/python3.7
lrwxr-xr-x 1 irfan admin 43 Nov 11 16:32 /usr/local/bin/python3.7-config -> ../Cellar/python/3.7.5/bin/python3.7-config
lrwxr-xr-x 1 irfan admin 37 Nov 11 16:32 /usr/local/bin/python3.7m -> ../Cellar/python/3.7.5/bin/python3.7m
lrwxr-xr-x 1 irfan admin 44 Nov 11 16:32 /usr/local/bin/python3.7m-config -> ../Cellar/python/3.7.5/bin/python3.7m-config
Change the default python symlink to the version you want to use from above.
Note that we only need to choose one that ends with python3.*. Avoid using the ones that end with config, python3.*m, or python3.*m-config.
Run this:
ln -s -f /usr/local/bin/python3.7 /usr/local/bin/python
Check if it's working:
python --version # Should output Python 3.7.5
|
Running poetry fails with /usr/bin/env: ‘python’: No such file or directory
|
I just installed poetry with the following install script
curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python3
However, when I execute poetry it fails with the following error
$ poetry
/usr/bin/env: ‘python’: No such file or directory
I recently upgraded to ubuntu 20.04, is this an issue with the upgrade or with poetry?
|
[
"poetry is dependent on whatever python is and doesn't attempt to use a specific version of python unless otherwise specified.\nThe above issue will exist on ubuntu systems moving forward 20.04 onwards as python2.7 is deprecated and the python command does not map to python3.x\nYou'll find specifying an alias for python to python3 won't work ( unless, perhaps you specify this in your bashrc instead of any other shell run command file ) as poetry spins it's own shell to execute commands.\nInstall the following package instead\nsudo apt install python-is-python3\n\nIt should be noted that you can install python2.7 if you want to and poetry should run fine.\n",
"Also an issue on some other Ubuntu versions/variants (Mint 19.3 here).\nThe python-is-python3 answer from arshbot is a good option, alternatively I found just tweaking the script that invokes poetry fixed it for me: A more delicate approach, but also more fragile in case the script gets updated (so overwritten) in future. So anyway here's that lightweight/fragile option:\nEdit the script,\nvi ~/.poetry/bin/poetry\n\n(other editors are available etc) and change the top line:\n#!/usr/bin/env python\n\nbecomes\n#!/usr/bin/env python3\n\nsorted!\nThis is only likely to be needed as a temporary workaround considering finswimmer's comment, from which it seems poetry will be more intelligent about using python3 in future in this situation.\n",
"FOR MAC USERS\nRun this:\nls -l /usr/local/bin/python*\n\nYou should get something like this:\nlrwxr-xr-x 1 irfan admin 34 Nov 11 16:32 /usr/local/bin/python3 -> ../Cellar/python/3.7.5/bin/python3\nlrwxr-xr-x 1 irfan admin 41 Nov 11 16:32 /usr/local/bin/python3-config -> ../Cellar/python/3.7.5/bin/python3-config\nlrwxr-xr-x 1 irfan admin 36 Nov 11 16:32 /usr/local/bin/python3.7 -> ../Cellar/python/3.7.5/bin/python3.7\nlrwxr-xr-x 1 irfan admin 43 Nov 11 16:32 /usr/local/bin/python3.7-config -> ../Cellar/python/3.7.5/bin/python3.7-config\nlrwxr-xr-x 1 irfan admin 37 Nov 11 16:32 /usr/local/bin/python3.7m -> ../Cellar/python/3.7.5/bin/python3.7m\nlrwxr-xr-x 1 irfan admin 44 Nov 11 16:32 /usr/local/bin/python3.7m-config -> ../Cellar/python/3.7.5/bin/python3.7m-config\n\nChange the default python symlink to the version you want to use from above.\nNote that, we only need to choose the one that ends with python3.*. Please avoid using the ones' that end with config or python3.*m or python3.*m-config.\nRun this:\nln -s -f /usr/local/bin/python3.7 /usr/local/bin/python\n\nCheck if it's working:\npython --version # Should output Python 3.7.5\n\n"
] |
[
23,
4,
1
] |
[] |
[] |
[
"python",
"python_poetry",
"ubuntu_20.04"
] |
stackoverflow_0061921940_python_python_poetry_ubuntu_20.04.txt
|
Q:
ThreadPoolExecutor - How can you bring results to Excel?
I'm using the Yahoo finance API to extract data using ThreadPoolExecutor. Can anyone show me how to bring the output to excel if possible? Thanks
Code
import yfinance as yf
from concurrent.futures import ThreadPoolExecutor
def get_stats(ticker):
info = yf.Tickers(ticker).tickers[ticker].info
print(f"{ticker} {info['currentPrice']} {info['marketCap']}")
ticker_list = ['AAPL', 'ORCL', 'PREM.L', 'UKOG.L', 'KOD.L', 'TOM.L', 'VELA.L', 'MSFT', 'AMZN', 'GOOG']
with ThreadPoolExecutor() as executor:
executor.map(get_stats, ticker_list)
Output
VELA.L 0.035 6004320
UKOG.L 0.1139 18496450
PREM.L 0.461 89516976
ORCL 76.755 204970377216
MSFT 294.8669 2210578825216
TOM.L 0.604 10558403
KOD.L 0.3 47496900
AMZN 3152.02 1603886514176
AAPL 171.425 2797553057792
GOOG 2698.05 1784584732672
A:
First, you can make an empty list and feed it with every result returned by the API, then construct a DataFrame from it and finally use pandas.DataFrame.to_excel to write the Excel spreadsheet.
Try this :
import yfinance as yf
from concurrent.futures import ThreadPoolExecutor
import pandas as pd
list_of_futures= []
def get_stats(ticker):
info = yf.Tickers(ticker).tickers[ticker].info
s= f"{ticker} {info['currentPrice']} {info['marketCap']}"
list_of_futures.append(s)
ticker_list = ['AAPL', 'ORCL', 'PREM.L', 'UKOG.L', 'KOD.L', 'TOM.L', 'VELA.L', 'MSFT', 'AMZN', 'GOOG']
with ThreadPoolExecutor() as executor:
executor.map(get_stats, ticker_list)
(
pd.DataFrame(list_of_futures)
[0].str.split(expand=True)
.rename(columns={0: "Ticker", 1: "Price", 2: "Market Cap"})
.to_excel("yahoo_futures.xlsx", index=False)
)
# Output (dataframe)
Ticker Price Market Cap
0 UKOG.L 0.064 14417024
1 VELA.L 0.0205 3331721
2 AMZN 93.41 952940888064
3 GOOG 97.6 1261313982464
4 ORCL 82.72 223027183616
5 KOD.L 0.28 47330360
6 AAPL 148.11 2356148699136
7 MSFT 247.49 1844906819584
8 TOM.L 0.455 9117245
9 PREM.L 0.57 127782592
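A note on the design: appending to a module-level list from worker threads works here (list.append is atomic in CPython), but the results arrive in completion order, not input order. Returning the row from get_stats and letting executor.map collect the results keeps the output aligned with ticker_list. A minimal sketch with the network call stubbed out — fake_quotes stands in for the yfinance lookup and is hypothetical data, not live quotes:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for yf.Tickers(ticker).tickers[ticker].info (assumed data for illustration)
fake_quotes = {'AAPL': (171.425, 2797553057792), 'MSFT': (294.8669, 2210578825216)}

def get_stats(ticker):
    price, cap = fake_quotes[ticker]
    # Return structured data instead of printing or appending to a shared list
    return {'Ticker': ticker, 'Price': price, 'Market Cap': cap}

with ThreadPoolExecutor() as executor:
    # executor.map yields results in the order of the input iterable
    rows = list(executor.map(get_stats, ['AAPL', 'MSFT']))

print([r['Ticker'] for r in rows])  # ['AAPL', 'MSFT'] — input order preserved
```

With real data, rows can be passed straight to pd.DataFrame(rows).to_excel(...), skipping the string round-trip and the str.split step entirely.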
|
ThreadPoolExecutor - How can you bring results to Excel?
|
I'm using the Yahoo finance API to extract data using ThreadPoolExecutor. Can anyone show me how to bring the output to excel if possible? Thanks
Code
import yfinance as yf
from concurrent.futures import ThreadPoolExecutor
def get_stats(ticker):
info = yf.Tickers(ticker).tickers[ticker].info
print(f"{ticker} {info['currentPrice']} {info['marketCap']}")
ticker_list = ['AAPL', 'ORCL', 'PREM.L', 'UKOG.L', 'KOD.L', 'TOM.L', 'VELA.L', 'MSFT', 'AMZN', 'GOOG']
with ThreadPoolExecutor() as executor:
executor.map(get_stats, ticker_list)
Output
VELA.L 0.035 6004320
UKOG.L 0.1139 18496450
PREM.L 0.461 89516976
ORCL 76.755 204970377216
MSFT 294.8669 2210578825216
TOM.L 0.604 10558403
KOD.L 0.3 47496900
AMZN 3152.02 1603886514176
AAPL 171.425 2797553057792
GOOG 2698.05 1784584732672
|
[
"First, you can make a empty list and feed id with every returned result by the API, then construct a dataframe from it and finally use pandas.to_excel to make the Excel spreadsheet.\nTry this :\nimport yfinance as yf\nfrom concurrent.futures import ThreadPoolExecutor\nimport pandas as pd\n\n\nlist_of_futures= []\n\ndef get_stats(ticker):\n info = yf.Tickers(ticker).tickers[ticker].info\n s= f\"{ticker} {info['currentPrice']} {info['marketCap']}\"\n list_of_futures.append(s)\n\nticker_list = ['AAPL', 'ORCL', 'PREM.L', 'UKOG.L', 'KOD.L', 'TOM.L', 'VELA.L', 'MSFT', 'AMZN', 'GOOG']\n\nwith ThreadPoolExecutor() as executor:\n executor.map(get_stats, ticker_list)\n \n(\n pd.DataFrame(list_of_futures)\n [0].str.split(expand=True)\n .rename(columns={0: \"Ticker\", 1: \"Price\", 2: \"Market Cap\"})\n .to_excel(\"yahoo_futures.xlsx\", index=False)\n)\n\n# Output (dataframe)\n Ticker Price Market Cap\n0 UKOG.L 0.064 14417024\n1 VELA.L 0.0205 3331721\n2 AMZN 93.41 952940888064\n3 GOOG 97.6 1261313982464\n4 ORCL 82.72 223027183616\n5 KOD.L 0.28 47330360\n6 AAPL 148.11 2356148699136\n7 MSFT 247.49 1844906819584\n8 TOM.L 0.455 9117245\n9 PREM.L 0.57 127782592\n\n"
] |
[
2
] |
[] |
[] |
[
"pandas",
"python",
"threadpoolexecutor"
] |
stackoverflow_0074581604_pandas_python_threadpoolexecutor.txt
|
Q:
How to print 'x' if the input int is b/w the desired numbers in python?
So, I started making a program and now I require it to print anything I like if the number is between 1-100. How do I make the program realize that it needs to print it if the number's between 90 and 100?
#This is a sample code
F2 = int(input())
if F2 == range(90 , 100):
print("A")
else:
print("BRUH")
I'm really new to this, I'll be very thankful if someone could help me
A:
You're checking whether f2 is equal to range(90, 100); the correct form is to check whether it's IN range(90, 100).
if F2 in range(90,101): #last number is not included in range(101 won't be included)
print('A')
also, if you try
f2 = range(90,100)
print(f2)
you'll understand what f2 == range(90,100) means.
If you use in, the code will check whether f2 is in [90, 91, 92, ..., 100]; if f2 equals any of them, it returns True.
A:
Simply do an if/else check: if the number falls within the range 90 to 100, it will print.
#This is a sample code
F2 = int(input())
if F2 >=90 and F2 <=100:
print("A")
else:
print("BRUH")
A:
Use in operator for membership test. == is comparison operator for equality.
print("A" if (F2 := int(input())) in range(90,101) else "BRUH")
|
How to print 'x' if the input int is b/w the desired numbers in python?
|
So, I started making a program and now I require it to print anything I like if the number is between 1-100. How do I make the program realize that it needs to print it if the number's between 90 and 100?
#This is a sample code
F2 = int(input())
if F2 == range(90 , 100):
print("A")
else:
print("BRUH")
I'm really new to this, I'll be very thankful if someone could help me
|
[
"You're checking if f2 is equal to range(90,100), correct form is if it's IN the range(90,100).\nif F2 in range(90,101): #last number is not included in range(101 won't be included)\n print('A')\n\nalso, if you try\nf2 = range(90,100)\nprint(f2)\n\nyou'll understand what f2 == range(90,100) means.\nif you use in, code will check if f2 in [90,91,92...100] , if f2 will be equal any of them, then returns True.\n",
"Simply do an if else check and if it falls under the range of 90 to 100 then it will print.\n#This is a sample code\nF2 = int(input())\nif F2 >=90 and F2 <=100:\n print(\"A\")\nelse:\n print(\"BRUH\")\n\n",
"Use in operator for membership test. == is comparison operator for equality.\nprint(\"A\" if (F2 := int(input())) in range(90,101) else \"BRUH\")\n\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074581868_python.txt
|
Q:
Web scrape data from exchange using API
I am looking to web scrape the second table containing the "Number of Insider Shares Traded" from the following website:
https://www.nasdaq.com/market-activity/stocks/aapl/insider-activity
Preferably I need someone to show how to use the Nasdaq API if possible. I believe the way I'd normally web scrape (using BeautifulSoup) would be inefficient for this task.
I have some existing code that helps obtain data from the same website using its API but for different information. Preferably, I just need a different API endpoint and then make some tweaks following a similar structure to the below code:
import requests
import json
nasdaq_dict = {}
url = 'https://api.nasdaq.com/api/company/AAPL/institutional-holdings?limit=15&type=TOTAL&sortColumn=marketValue&sortOrder=DESC'
headers = {
'accept': 'application/json, text/plain, */*',
'origin': 'https://www.nasdaq.com',
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36'
}
r = requests.get(url, headers=headers)
nasdaq_dict['activePositions'] = r.json()['data']['activePositions']['rows']
nasdaq_dict['newSoldOutPositions'] = r.json()['data']['newSoldOutPositions']['rows']
with open('AAPL_institutional_holdings.json', 'w') as f:
json.dump(nasdaq_dict, f, indent=4)
A:
Here is one way of getting that data (as a dictionary: please say if you want it as a table):
import requests
headers = {
'accept-language': 'en-US,en;q=0.9',
'origin': 'https://www.nasdaq.com/',
'referer': 'https://www.nasdaq.com/',
'accept': 'application/json, text/plain, */*',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'
}
data = requests.get('https://api.nasdaq.com/api/company/AAPL/insider-trades?limit=15&type=ALL&sortColumn=lastDate&sortOrder=DESC', headers=headers).json()['data']['numberOfSharesTraded']
print(data)
Result in terminal:
{'headers': {'insiderTrade': 'INSIDER TRADE', 'months3': '3 MONTHS', 'months12': '12 MONTHS'}, 'rows': [{'insiderTrade': 'Number of Shares Bought', 'months3': '0', 'months12': '0'}, {'insiderTrade': 'Number of Shares Sold', 'months3': '1,317,881', 'months12': '1,986,819'}, {'insiderTrade': 'Total Shares Traded', 'months3': '1,317,881', 'months12': '1,986,819'}, {'insiderTrade': 'Net Activity', 'months3': '(1,317,881)', 'months12': '(1,986,819)'}]}
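If you do want that dictionary rendered as a table, a minimal sketch (reusing the response shape printed above, with only two rows kept for brevity) that pads each column to its widest entry:

```python
# Shape copied from the API response shown above; rows shortened for brevity.
data = {
    'headers': {'insiderTrade': 'INSIDER TRADE', 'months3': '3 MONTHS', 'months12': '12 MONTHS'},
    'rows': [
        {'insiderTrade': 'Number of Shares Bought', 'months3': '0', 'months12': '0'},
        {'insiderTrade': 'Number of Shares Sold', 'months3': '1,317,881', 'months12': '1,986,819'},
    ],
}

columns = list(data['headers'])                    # column keys in display order
table = [[data['headers'][c] for c in columns]]    # header row first
table += [[row[c] for c in columns] for row in data['rows']]

# Pad every cell to its column's widest entry so the output lines up.
widths = [max(len(row[i]) for row in table) for i in range(len(columns))]
for row in table:
    print('  '.join(cell.ljust(w) for cell, w in zip(row, widths)))
```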
Q:
How to sort the elements of a list based off associated index python
I am looking to sort MyArray[] of size n elements so that MyArray[n] = n. If the element is missing it should be replaced with a -1. Here is an example:
Input : MyArray = [-1, -1, 6, 1, 9, 3, 2, -1, 4, -1]
Output : [-1, 1, 2, 3, 4, -1, 6, -1, -1, 9]
MyArray = [-1, -1, 6, 1, 9, 3, 2, -1, 4, -1]
MyArrayNew = []
for n in MyArray:
if n <= len(MyArray):
MyArrayNew[n] = n
else:
MyArrayNew[n] = -1
print(MyArrayNew)
Here is my code thus far, any pointers on how to properly code this would be greatly appreciated!
A:
Two ways to sort an array that I know of in Python:
For in-place sorting, apply the sort() method to your array:
MyArray.sort()
The second way is to use a nested for loop and compare values in the array from index 0 to the final item. I normally use a temp value to keep the previous value, compare it with the current one, and swap the values according to size. Example code below:
for i in range(len(MyArray)):
    # outer loop
    for j in range(i+1, len(MyArray)):
        # start from i+1, because you always want to compare the
        # previous element with the current element in the outer loop
        if MyArray[i] > MyArray[j]:
            temp = MyArray[i]
            MyArray[i] = MyArray[j]
            MyArray[j] = temp
print(MyArray)
A:
You're making two mistakes.
You use n as an index as well as the value. From the for loop it can be seen that n is the value of each element in the list MyArray. But later on you use it as an index when you call MyArrayNew[n]. When n is -1, there are probably some things happening that you do not want.
List indices can only be assigned if they already exist. MyArrayNew starts off empty, so you can't say: change the third index to three, because the third index doesn't exist yet.
There are many approaches to solve this problem. I'll give one:
To solve the second problem I suggest appending instead of assigning by index. To solve the first problem, you could use for i in range(len(arr)):, but I prefer enumerate.
I'll also approach it the other way around: cycle through the indices and check whether each should be its index value, or -1.
This results in the following code:
MyArray = [-1, -1, 6, 1, 9, 3, 2, -1, 4, -1]
MyArrayNew = []
for index, value in enumerate(MyArray):
if index in MyArray:
MyArrayNew.append(index)
else:
MyArrayNew.append(-1)
print(MyArrayNew)
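The same index-first idea can also be written as a list comprehension; converting the list to a set first makes each membership check O(1) instead of scanning the list every time:

```python
MyArray = [-1, -1, 6, 1, 9, 3, 2, -1, 4, -1]
present = set(MyArray)  # set gives O(1) membership tests
MyArrayNew = [i if i in present else -1 for i in range(len(MyArray))]
print(MyArrayNew)  # [-1, 1, 2, 3, 4, -1, 6, -1, -1, 9]
```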
Q:
NameError: name 'array' is not defined in python
I get NameError: name 'array' is not defined in python error when I want to create array, for example:
a = array([1,8,3])
What am I doing wrong? How to use arrays?
A:
You need to import the array method from the module.
from array import array
http://docs.python.org/library/array.html
A:
For basic Python, you should just use a list (as others have already noted).
If you are trying to use NumPy and you want a NumPy array:
import numpy as np
a = np.array([1,8,3])
If you don't know what NumPy is, you probably just want the list.
A:
You probably don't want an array. Try using a list:
a = [1,8,3]
Python lists perform like dynamic arrays in many other languages.
A:
If you need a container to hold a bunch of things, then lists might be your best bet:
a = [1,8,3]
Type
dir([])
from a Python interpreter to see the methods that lists support, such as append, pop, reverse, and sort.
Lists also support list comprehensions and Python's iterable interface:
for x in a:
print x
y = [x ** 2 for x in a]
A:
You need to import the array.
from numpy import array
A:
In Python, an import problem can occur when you accidentally name your working file the same as a module name. Python then opens the file you created instead of the real module, which causes a circular import and eventually throws an error.
This question was asked 10 years ago, but this may be helpful for later Python learners.
A:
If you're trying to use NumPy, use this:
import numpy as np
a = np.array([1, 2, 3])
If not then a list is way more easier:
a = [1, 2, 3]
A:
from array import *
myarray = array('i', [10, 39, 48, 38])
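Putting the stdlib answers above together, a minimal runnable sketch of the array module (the 'i' typecode means signed int):

```python
from array import array

a = array('i', [1, 8, 3])  # 'i' = signed int typecode
a.append(5)                # arrays support list-like methods
print(a.tolist())          # [1, 8, 3, 5]
```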
Q:
communication with my bot discord doesnt work
Hello, I'm trying to communicate with my Discord bot but it doesn't answer.
The bot is online but there is no answer. Here is the code:
import discord
client = discord.Client(intents=discord.Intents.default())
client.run("token")
@client.event
async def on_message(message):
if message.content == "ping":
await message.channel.send("pong")
A:
You need to enable the message content intent.
Add this to your code under your intents definition:
intents.message_content = True
Then head to the developer dashboard
and enable Message Content under the Privileged Intents; after that your code should work ;-)
A:
Having your bot answer sent messages requires the message_content intent.
With intents=discord.Intents.default() the following intents are DISABLED:
self.presences (= having your bot see rich presence, e.g. what game's being played)
self.members (= having your bot see the users of a guild)
self.message_content (= having your bot see the content of messages)
You can now enable all mentioned intents or only specific ones. If you want to send a message in response to a sent message, you need the intent self.message_content.
You can also add all intents to avoid any problems with them in the future (Note that after a certain amount of Discord servers you need to apply to use all privileged intents.)
intents = discord.Intents.all()
client = discord.Client(intents=intents)
You should consider activating the intents:
Visit the developer portal > choose your app > Privileged Gateway Intents.
For further programming in Discord.py, consider reading the docs, as there is a new version of Discord.py.
A:
The message_content intent mentioned above is necessary, but it's not the only thing wrong here.
When you call client.run(), nothing below it will execute until the client goes down. This means that your on_message event is never created. client.run() should be the very last line in your file.
Q:
How do I express an extended subtype of a generic object in Python?
I want to set an attribute on an object, but keep the rest of the object intact, e.g.
from typing import cast, TypeVar, Generic
T = TypeVar("T")
class HasFoo(Generic[T]):
foo: str
def set_foo_on_obj(obj: T) -> HasFoo[T]:
setattr(obj, 'foo', 'some_value')
return cast(HasFoo[T], obj)
def func(a: int) -> int:
return a
func_with_foo = set_foo_on_obj(func)
Here I'm trying to add the attribute foo, and tell the type checker "it's the same object as before, but has a foo attribute now".
But the above example simply erases the other properties of the callable.
A:
As mentioned by @paweł-rubin, there is no elegant/direct way that generalizes over any given type so long as intersection types are missing from the type system.
You can write workarounds with different degrees of complexity for specific use cases though using structural subtyping with what is already offered by typing.Protocol. If it is a simple callable you want to "enhance" with your added protocol, you can do something like this:
from collections.abc import Callable
from typing import Generic, ParamSpec, Protocol, TypeVar, cast
T = TypeVar("T")
P = ParamSpec("P")
class HasFoo(Protocol):
foo: str
class CallableWithFoo(HasFoo, Generic[P, T]):
def __call__(self, *args: P.args, **kwargs: P.kwargs) -> T: ...
def set_foo_on_func(function: Callable[P, T]) -> CallableWithFoo[P, T]:
function.foo = "some_value"
return cast(CallableWithFoo[P, T], function)
def func(a: int) -> int:
return a
func_with_foo = set_foo_on_func(func)
reveal_type(func_with_foo) # CallableWithFoo[[a: builtins.int], builtins.int]
reveal_type(func_with_foo(1)) # builtins.int
reveal_type(func_with_foo.foo) # builtins.str
The use of typing.ParamSpec in our generic class allows retaining the callable signature after decoration.
Obviously, other types (not Callable subtypes) will require other protocol inheritance. But this is probably as good as it gets without proper intersection types.
There is also no way around typing.cast IMO because dynamic attribute assignment is ignored by static type checkers for obvious reasons.
EDIT: Changed the setattr(function, "foo", "some_value") to regular attribute assignment. Thanks @SUTerliakov for pointing it out.
Q:
Importing Numpy into Sublime Text 3
I'm new to coding and I have been learning it on Jupyter. I have anaconda, Sublime Text 3, and the numpy package installed on my Mac.
On Jupyter, we would import numpy by simply typing
import numpy as np
However, this doesn't seem to work in Sublime, as I get the error ModuleNotFoundError: No module named 'numpy'
I would appreciate it if someone could guide me on how to get this working. Thanks!
A:
If you have Anaconda, install Spyder.
If you continue to have this problem, you could check all the libraries installed from Anaconda.
I suggest you install numpy from Anaconda.
A:
import numpy as np
arr=np.array([19,28,48])
Q:
How to use transaction with "async" functions in Django?
When async def call_test(request): called async def test(): as shown below (I use Django==3.1.7):
async def test():
for _ in range(0, 3):
print("Test")
async def call_test(request):
await test() # Here
return HttpResponse("Call_test")
There was no error displaying the proper result below on console:
Test
Test
Test
But, when I put @transaction.atomic() on async def test(): as shown below:
@transaction.atomic # Here
async def test():
for _ in range(0, 3):
print("Test")
# ...
The error below occurred:
django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async.
So, I put @sync_to_async under @transaction.atomic() as shown below:
@transaction.atomic
@sync_to_async # Here
async def test():
for _ in range(0, 3):
print("Test")
# ...
But, the same error below occurred:
django.core.exceptions.SynchronousOnlyOperation: You cannot call this
from an async context - use a thread or sync_to_async.
So, I put @sync_to_async on @transaction.atomic() as shown below:
@sync_to_async # Here
@transaction.atomic
async def test():
for _ in range(0, 3):
print("Test")
# ...
But, other error below occurred:
RuntimeWarning: coroutine 'test' was never awaited handle = None #
Needed to break cycles when an exception occurs. RuntimeWarning:
Enable tracemalloc to get the object allocation traceback
So, how can I use transaction with async functions in Django?
A:
I found the documentation of Django 4.1 says below:
Transactions do not yet work in async mode. If you have a piece of code that needs transactions behavior, we recommend you write that piece as a single synchronous function and call it using sync_to_async().
So, @transaction.atomic() cannot be used with async functions in the older version Django 3.1.7 either.