Search Results

Search found 32757 results on 1311 pages for 'database cursor'.

  • How can I request local pages in the background of an ASP.NET MVC app?

    - by flipdoubt
    My ASP.NET MVC app needs to run a set of tasks at startup and in the background at a regular interval. I have implemented each task as a controller action and listed the app-relative path to the action in the database. I implemented a TaskRunner process that gets the urls from the database and requests each one at a regular interval using WebRequest.Create, but this throws a UriFormatException. I cannot use this answer or any code that plucks values from HttpContext.Current.Request without getting an HttpException with the message "Request is not available in this context". The Request object is not available because my code uses System.Threading.Timer to do background processing, as recommended here. Here are my questions:

    1. Is there really no way to make local web requests within an ASP.NET web app?
    2. Is there really no way to dynamically ascertain the root path to the web app, even from static dependencies in ASP.NET?
    3. I was trying to avoid storing the app's root path in the database (as FogBugz does with its "Maintenance Path"), but is this the best option?
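
    A minimal sketch of one workaround, capturing the app root from the first incoming request so the timer callback never needs HttpContext.Current (names like RunTask are hypothetical, and nothing can run until at least one request has arrived):

        using System;
        using System.Net;
        using System.Web;

        public class MvcApplication : HttpApplication
        {
            // Set once from the first real request; safe to read from the timer thread.
            public static volatile string BaseUrl;

            protected void Application_BeginRequest(object sender, EventArgs e)
            {
                if (BaseUrl == null)
                {
                    HttpRequest request = ((HttpApplication)sender).Context.Request;
                    BaseUrl = request.Url.GetLeftPart(UriPartial.Authority)
                              + request.ApplicationPath.TrimEnd('/');
                }
            }
        }

        // Called from the System.Threading.Timer callback in TaskRunner.
        static void RunTask(string appRelativePath)
        {
            if (MvcApplication.BaseUrl == null) return; // no request seen yet
            // WebRequest.Create needs an absolute URI, hence the UriFormatException
            // for app-relative paths such as "~/tasks/run".
            string url = MvcApplication.BaseUrl + "/" + appRelativePath.TrimStart('~', '/');
            using (WebResponse response = WebRequest.Create(url).GetResponse())
            {
                // The controller action has run; the response body is not needed.
            }
        }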

  • SaaS Multi-tenancy Applications: How is data import/export/backup being implemented?

    - by Mark Redman
    How are applications providing import / export (or backups) of data in SaaS-based multi-tenancy applications, particularly single-database designs?

    Imports: Keeping things simple, I think basic imports are useful, i.e. CSV to a spec (or with a way of providing a mapping between CSV columns and fields in the database).

    Exports: In single-database designs I have seen XML exports and HTML (a basic generated site) exports of data. I would assume that XML is the better option? How does one cater for relational data? Would you reference various things within the XML and provide documentation of the relationships, or let users figure this out? Are vendors providing an export/backup that can be imported back in/restored? Your comments appreciated.
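
    For the relational-data question, a minimal sketch of one option using ADO.NET's nested DataRelation, so the exported XML encodes the parent/child structure itself (table and column names are hypothetical):

        using System.Data;
        using System.Data.SqlClient;

        var ds = new DataSet("TenantExport");
        using (var conn = new SqlConnection(tenantConnectionString))
        {
            // Single-database design: always filter by tenant id.
            new SqlDataAdapter("SELECT * FROM Customers WHERE TenantId = 42", conn).Fill(ds, "Customers");
            new SqlDataAdapter("SELECT * FROM Orders WHERE TenantId = 42", conn).Fill(ds, "Orders");
        }
        DataRelation rel = ds.Relations.Add("CustomerOrders",
            ds.Tables["Customers"].Columns["Id"],
            ds.Tables["Orders"].Columns["CustomerId"]);
        rel.Nested = true; // each <Orders> element is written inside its parent <Customers>
        ds.WriteXml("export.xml", XmlWriteMode.WriteSchema);

    WriteSchema embeds an inline XSD describing both tables and the relation, which doubles as the documentation of the relationships that users would otherwise have to figure out.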

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                            [lng - tz_lookup_radius, lat - tz_lookup_radius],
                            [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows:

    - Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the csv header), and some values converted (to datetime, to integers, to floats, etc.).
    - For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone.
    - If the mapping is successful, that time zone is used to convert the record timestamp (Pacific Time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated.

    The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT
    The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2
    OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, but it works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.
    EDIT3
    Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.012    0.012 1231.257 1231.257  ImportDataIntoMongo.py:1(<module>)
             1    0.001    0.001 1230.959 1230.959  ImportDataIntoMongo.py:187(main)
             1  853.558  853.558  853.558  853.558  {raw_input}
             1    0.598    0.598  370.510  370.510  ImportDataIntoMongo.py:165(insert_documents)
        343407    9.965    0.000  359.034    0.001  ImportDataIntoMongo.py:137(read_csv_zip)
        343408    2.927    0.000  287.035    0.001  c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408    1.842    0.000  274.803    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408    2.542    0.000  271.212    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408    4.512    0.000  253.673    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408    0.971    0.000  242.078    0.001  c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.028    0.028 2889.164 2889.164  ImportDataIntoMongo.py:1(<module>)
             1    0.017    0.017 2888.679 2888.679  ImportDataIntoMongo.py:202(main)
             1 2365.526 2365.526 2365.526 2365.526  {raw_input}
             1    0.766    0.766  502.817  502.817  ImportDataIntoMongo.py:180(insert_documents)
        343407    9.147    0.000  491.433    0.001  ImportDataIntoMongo.py:152(read_csv_zip)
        343406    0.571    0.000  391.394    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957    0.001  390.824    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616    0.000   38.705    0.000  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406    6.134    0.000   33.326    0.000  ImportDataIntoMongo.py:162(<dictcomp>)
           346    0.396    0.001   30.665    0.089  c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4
    I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.

  • PHP search and replace

    - by Dave
    I am trying to create a database field merge into a document (RTF) using PHP, i.e. if I have a document that starts:

        Dear Sir,
        Customer Name: [customer_name], Date of order: [order_date]

    then after retrieving the appropriate database record I can use a simple search and replace to insert the database field into the right place. So far so good. I would however like to have a little more control over the data before it is replaced. For example, I may wish to Title Case it, or convert a delimited string into a list with carriage returns. I would therefore like to be able to add extra formatting commands to the field to be replaced, e.g.:

        Dear Sir,
        Customer Name: [customer_name, TC], Date of order: [order_date, Y/M/D]

    There may be more than one formatting command per field. Is there a way that I can now search for these strings? The format of the strings is not set in stone, so if I have to change the format then I can. Any suggestions appreciated.

  • How to open db.sqlite in Aloha Editor?

    - by Mariusz Poplawski
    I'm using Aloha Editor and I'm able to save/edit content without any problem. The editor saves content in an SQLite database (db.sqlite). I know where the file is, and I see that it's getting bigger while I'm adding more text to it. But when I transfer that file using FileZilla to my local computer and open it in Notepad, I see only:

        ** This file contains an SQLite 2.1 database **

    I've tried to use a few programs, but they always say that the database is unable to be opened. The programs I've tried: Sqliteman-1.2.2 and sqlitebrowser_200_b1_win.

  • How should images be stored when multiple sizes are needed?

    - by Josh Curren
    What is the best way to store images? Currently when an image is uploaded I resize it to 3 different sizes (a thumbnail, a normal size, and a large size). I save in a database a description of the image, the format, and use the id number from the database as the image name. Each size image has its own directory. Should I be storing the images in the database? Should I only be storing the larger size and generate the thumbnail as needed? Or any other ideas you have?
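
    A minimal sketch of the "store one large original, derive the rest" option mentioned above (System.Drawing assumed; paths and sizes are hypothetical):

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;

        static class Thumbnails
        {
            // Generate (and cache to disk) a thumbnail the first time it is asked for,
            // instead of storing three copies of every upload.
            public static void SaveThumbnail(string sourcePath, string thumbPath, int maxWidth)
            {
                using (Image original = Image.FromFile(sourcePath))
                {
                    int width = Math.Min(maxWidth, original.Width);
                    int height = original.Height * width / original.Width; // keep aspect ratio
                    using (var thumb = new Bitmap(original, width, height))
                    {
                        thumb.Save(thumbPath, ImageFormat.Jpeg);
                    }
                }
            }
        }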

  • Encryption-Decryption in Rails

    - by Salil
    Hi All, I am using require 'digest/sha1' to encrypt my password and save it into the database. During login I authenticate by matching the encrypted password saved in the database against the encryption of the one the user enters in the password field. As of now everything works fine, but now I want to do 'Forgot Password' functionality. To do this I need to decrypt the password which is saved in the database to find the original one. How do I decrypt using digest/sha1? Or does anyone know an algorithm which supports encryption & decryption as well? I am using Ruby on Rails, so I need the Ruby way to accomplish it.

  • Edit PDF online and save and form data to server

    - by Clowerweb
    Hello, I have some PDF documents which are being displayed in the browser, with some fields already being pre-populated from the database using iTextSharp (we are running Windows Server 2008, IIS 7, SQL Server 2008, and ASP.NET 2.0/2.5 with C#). Our clients need to be able to fill in the remaining fields and save the PDF to the server. I have considered the following possibilities:

    1. Somehow using iTextSharp to parse the form fields, grab all the form data and save it to the database on submit.
    2. Adding a submit button to the PDF itself using LiveCycle, with some sort of JS click event to save the FDF/XFDF/XDP/XML data either to the database or to a flat file on the server.

    I am currently unsure as to what the best approach would be, what would work, or how to implement any of these possible solutions, so any help would be greatly appreciated. Thanks!
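
    For possibility 1, a minimal sketch of reading the submitted fields with iTextSharp (a 5.x-style API is assumed; uploadedPdfStream is a hypothetical name, and the field names come from the PDF itself):

        using iTextSharp.text.pdf;

        var reader = new PdfReader(uploadedPdfStream); // a stream or a file path both work
        AcroFields form = reader.AcroFields;
        foreach (string fieldName in form.Fields.Keys)
        {
            string value = form.GetField(fieldName);
            // persist (fieldName, value) to SQL Server here
        }
        reader.Close();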

  • Crystal Reports, alignment when printing

    - by andySF
    Hello, I have a report with two objects: a text object running from left to right, and a database field on top of it. When I load the report in the viewer (or print from the viewer) it looks OK, but when I print the report programmatically with ReportDocument.PrintToPrinter() the database field moves to the left and, as a result, prints on top of the text object. Concatenating the text and database field is not an option. The margins are the same in the viewer and before printing programmatically. In viewer: http://promagic.hopto.org/screens/screen_2010-5-13_10_24_7-531.png Programmatically: http://promagic.hopto.org/screens/screen_2010-5-13_10_25_55-187.png (the bold text is from the db) Can anyone help me? Thanks!

  • Do I have to use Stored Procedures to get query-level security or can I still do this with Dynamic SQL?

    - by Peter Smith
    I'm developing an application where I'm concerned about locking down access to the database. I know I can develop stored procedures (with proper parameter checking) to limit a database user to an exact set of queries to execute. It's imperative that no queries other than the ones I created in the stored procedures be allowed to execute under that user. Ideally, even if a hacker gained access to the database connection (which only accepts connections from certain computers), they would only be able to execute the predefined stored procedures. Must I choose stored procedures for this, or can I use dynamic SQL with these fine-grained permissions?
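
    The permissions side of this can be expressed directly; a minimal sketch, run once as an administrator (object and user names are hypothetical):

        using System.Data.SqlClient;

        // With only EXECUTE granted, a hijacked connection cannot run ad-hoc SQL
        // against the tables - which also means the application itself cannot use
        // dynamic SQL under this user, only the predefined procedures.
        const string lockDown = @"
            DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Orders TO app_user;
            GRANT EXECUTE ON dbo.usp_GetOrderById TO app_user;";

        using (var conn = new SqlConnection(adminConnectionString))
        using (var cmd = new SqlCommand(lockDown, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }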

  • Swapping out web services

    - by zachary
    I created a GUI in .NET that I want other people to use. It connects to my custom database via a web service and returns data. Now other people want to use it, but they tell me that they want to use their own database. How can I let them plug their database results into my GUI? It is almost as though I want to repoint to their web service somehow... My GUI is in .NET, but they could be using any language, even Java.
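
    A minimal sketch of one way to make the endpoint repointable, assuming an ASMX-style generated proxy (which exposes a Url property); the proxy and setting names are hypothetical:

        using System.Configuration;

        var service = new CustomDataService(); // generated web-service proxy
        // Each customer points the GUI at their own implementation of the same
        // WSDL contract via a config entry instead of a baked-in address.
        service.Url = ConfigurationManager.AppSettings["DataServiceUrl"];
        var results = service.GetResults(); // hypothetical contract method

    As long as their service implements the same WSDL contract, the GUI does not care what language (Java or anything else) sits behind it.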

  • Service Broker error message: Dialog security is unavailable for this conversation because there is no security certificate bound to the database principal

    - by yanigisawa
    I am getting this error in my sys.transmission_queue table whenever I attempt to send a SQL Service Broker message between two different SQL Server servers (i.e. the databases are on two different physical machines):

        Dialog security is unavailable for this conversation because there is no security certificate bound to the database principal (Id: 5). Either create a certificate for the principal, or specify ENCRYPTION = OFF when beginning the conversation

    When this error refers to "database principal", what is it referring to? (The "master" database? The dbo user?) I've used the CREATE CERTIFICATE command, backed up the certificate, and created a same-named certificate on the other server with the backup .cer file from the first server, but I keep getting this message. Any help would be appreciated in getting me pointed in the right direction. I must be missing something obvious. FYI, in my development environment, both the initiating and target databases were on the same physical server and same SQL instance, and everything was working fine.
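
    A minimal sketch of the second option the error message itself offers - turning dialog security off for the conversation - wrapped in C# for illustration (service, contract and message-type names are hypothetical):

        using System.Data.SqlClient;

        const string beginDialog = @"
            DECLARE @handle UNIQUEIDENTIFIER;
            BEGIN DIALOG CONVERSATION @handle
                FROM SERVICE [//InitiatorService]
                TO SERVICE '//TargetService'
                ON CONTRACT [//MyContract]
                WITH ENCRYPTION = OFF;  -- dialog security is skipped entirely
            SEND ON CONVERSATION @handle
                MESSAGE TYPE [//MyMessage] (N'<test/>');";

        using (var conn = new SqlConnection(initiatorConnectionString))
        using (var cmd = new SqlCommand(beginDialog, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }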

  • Login fails after upgrade to ASP.net 4.0 from 3.5

    - by lomac
    I cannot log in using any of the membership accounts using the .NET 4.0 version of the app. It fails as if it's the wrong password, and FailedPasswordAttemptCount is incremented in the my_aspnet_membership table. (I am using membership with the MySQL membership provider.) I can create new users, and they appear in the database, but I cannot log in using the new user credentials (yes, IsApproved is 1). One clue is that the hashed passwords in the database are longer for the users created using the ASP.NET 4.0 version, e.g. 3lwRden4e4Cm+cWVY/spa8oC3XGiKyQ2UWs5fxQ5l7g=, while the old .NET 3.5 ones are all like +JQf1EcttK+3fZiFpbBANKVa92c=. I can still log in when connecting to the same db with the .NET 3.5 version, but only to the old accounts, not the new ones created with the .NET 4.0 version. The 4.0 version cannot log in to any accounts. I tried dropping the whole database on my test system (the membership tables are then auto-created on first run), but it's still the same: I can create users, but I can't log in.

  • How can USER_DEPENDENCIES read from the procedure?

    - by Moudiz
    If I run this query:

        SELECT DISTINCT U.REFERENCED_NAME, U.REFERENCED_TYPE
        FROM USER_DEPENDENCIES U
        WHERE U.NAME IN ('P_CREATE_T')

    it will give me:

        U.REFERENCED_NAME  | U.REFERENCED_TYPE
        random_name_table  | table

    If I drop this table random_name_table:

        DROP TABLE random_name_table

    and run the dependency query again, it will give me this:

        U.REFERENCED_NAME               | U.REFERENCED_TYPE
        BIN$6WfJh8MWWGngQ3ATqMDOpQ==$0  | table

    I know the result is related to the recycle bin, but what I am asking is: is there a way that shows the table name even if it is dropped? I mean, shouldn't the dependency query read from the procedure and not from the database? If not, is there a query that reads from the procedure and not from the database?

    Edit: OK, I will make it clear: does USER_DEPENDENCIES read from the procedure or from the database? My second question: does the recycle bin name always show? I mean, are there times when the recycle bin result disappears?

  • How to name uploaded files in php to prevent them from being overwritten?

    - by user156814
    I'm trying to add user-submitted articles to my website (only for admins). With each article comes an option to upload up to 3 images. My database is set up like this:

        Articles
            id
            user_id
            title
            body
            date_added
            last_edited

        Photos
            id (auto_increment)
            article_id

    First I save the article in the database, then I upload the photo (temporarily), then I create a new photo record in the database saving the article_id. Then I rename the uploaded photo to be the same as the primary key of the photo record, and to be a png:

        $filename = $photo->id . '.png';

    I figured this would be a good way to prevent files from being overwritten. This seems flawed to me. Any suggestions on how I should save my records and photos? Thanks

  • SQL join from multiple tables

    - by Kenny Anderson
    Hi all. We've got a system (MS SQL 2008 R2-based) that has a number of "input" databases and one "output" database. I'd like to write a query that will read from the output DB and JOIN it to data in one of the source DBs. However, the source table may be one or more individual tables :( The name of the source DB is included in the output DB; ideally, I'd like to do something like the following (pseudo-SQL ahoy):

        SELECT output.UID, output.description, input.data
        FROM output.dbo.description
        LEFT JOIN (SELECT input.UID, input.data
                   FROM [output.sourcedb].dbo.datatable) AS input
            ON input.UID = output.UID

    Is there any way to do something like the above - "dynamically" specify the database and table to be joined on for each row in the query?
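
    A minimal sketch of the dynamic variant in C#: read the source-database name from the output DB first, then build the three-part name for the join (the helper name is hypothetical, and the stored database names must be trusted or validated, since identifiers cannot be passed as SQL parameters):

        string sourceDb = LookupSourceDb(outputConnection); // hypothetical helper
        string sql = string.Format(@"
            SELECT o.UID, o.description, i.data
            FROM dbo.description AS o
            LEFT JOIN [{0}].dbo.datatable AS i ON i.UID = o.UID;", sourceDb);

    Because the source can differ per row, the practical shape of this is one query per distinct source database (grouping the rows first); a single static query cannot switch databases row by row.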

  • How can I insert large files into a MySQL DB using PHP?

    - by anjan
    Hi! I want to upload a large file of at most 10M to my MySQL database. Using .htaccess I changed PHP's own file upload limit to "10485760" (10M), and I am able to upload files up to 10M in size without any problem. But I cannot insert the file into the database if it is more than 1M in size. I am using file_get_contents to read all the file data and pass it to the insert query as a string to be inserted into a LONGBLOB field. But files larger than 1M are not being added to the database, though I can use print_r($_FILES) to confirm that the file uploaded correctly. Any help will be appreciated, and I will need it within the next 6 hours. So, please help! Best regards, Anjan

  • What about the SQL transaction log?

    - by Michel
    Hi, I always thought that the SQL transaction log keeps track of all the transactions done in the database, so it could help recover the database file in case of an unexpected power-down or something like that. So then, in normal usage, when the data is committed and written to disk, it is cleared, because all the data is nice and safe in the mdf file. Seeing the ldf file grow, and reading some, I understand that that is not the case: it will keep growing until you shrink the log. Only at that point are all the committed transactions cleared and the log file shrunk. I found some sp's that should do this, but I also found the theory that you first have to back up the database? That last step doesn't make sense to me, so can anyone tell me if that is correct and, if so, why?

  • Check if a SQL table exists.

    - by Carra
    What's the best way to check if a table exists in a SQL database in a database-independent way? I came up with:

        bool exists;
        const string sqlStatement = @"SELECT COUNT(*) FROM my_table";
        try
        {
            using (OdbcCommand cmd = new OdbcCommand(sqlStatement, myOdbcConnection))
            {
                cmd.ExecuteScalar();
                exists = true;
            }
        }
        catch (Exception ex)
        {
            exists = false;
        }

    Is there a better way to do this? This method will not work when the connection to the database fails. I've found ways for Sybase, SQL Server and Oracle, but nothing that works for all databases.
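
    A somewhat more portable probe is to ask the catalog instead of touching the table; a minimal sketch reusing myOdbcConnection from above, with the caveat that INFORMATION_SCHEMA exists in SQL Server, MySQL and PostgreSQL but not in Oracle or Sybase ASE, so the try/catch above remains the fallback for those:

        const string checkSql =
            "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'my_table'";
        using (OdbcCommand cmd = new OdbcCommand(checkSql, myOdbcConnection))
        {
            // No exception-driven control flow: zero rows simply means "not there".
            exists = Convert.ToInt32(cmd.ExecuteScalar()) > 0;
        }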

  • Audio organizing via CLI

    - by Radek Šimko
    I'm looking for some software for my OpenSUSE box with which I would be able to organize my audio files. I've found one that may be good, but it's unable to run without an X server (i.e. in the CLI):

        http://musicbrainz.org/doc/MusicBrainz_Picard

    I'm not looking for ID3 renamers - there are maybe hundreds of them. I'm looking for software which has its own database, or is able to communicate with some database, like CDDB, Gracenote, last.fm, etc.

  • cPanel Redundancy

    - by bogha
    Hi, what information do I need to do the following: we will have 2 servers; each one will have cPanel/WHM installed and both will have DNS. We want to ensure that redundancy is achieved on both servers, meaning the servers should work as an active/standby unit: 2 cPanel active/standby, 2 DNS active/standby. Also, does this have any impact on the MySQL database? Do we have to buy another server to use as the MySQL database, or is it possible to sync all the cPanel information from one server to another? Thank you.

  • ASP.NET profile deletion

    - by Ben Aston
    We use the ASP.NET profile subsystem to associate key-value pairs of information with users. I am implementing functionality for the deletion of users. I have been using the ProfileManager.DeleteProfile(string userName) method, but I have noticed that this leaves a row in the aspnet_Users table in the aspnetdb database we use to store the profile information. Clearly, I could manipulate the database directly, but I am reluctant to do so. What is the best way to ensure all information associated with a user is deleted from the aspnetdb? PS: there is a tantalising Membership.DeleteUser(string userName, bool deleteEverything) method, but this returns a database connection error. A web.config issue?
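
    A minimal sketch of the Membership route, which avoids manipulating aspnetdb directly (the second argument is the deleteAllRelatedData flag of Membership.DeleteUser):

        using System.Web.Security;

        // true = also delete profile, role and personalization data,
        // including the row in aspnet_Users.
        bool removed = Membership.DeleteUser(userName, true);

    The connection error is worth chasing separately; one thing to check is whether the membership provider's connectionStringName in web.config points at the same aspnetdb connection string that the profile provider uses.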

  • Disabling Linux mouse middle button

    - by syrenity
    Hi. In Linux, by default, the middle mouse button (i.e. clicking the wheel) pastes the selected text at the cursor position. This causes accidental pasting while I'm trying to scroll through code / config files via the mouse - especially in Eclipse. Any idea how to disable it? Thanks.

  • How to run an .exe application in another computer?

    - by ADAM
    I am working on a C# application in Visual Studio 2013. When I run the .exe file from my computer, the application runs very well and all the features work. When I try to run the .exe on another computer, the database side doesn't work and the connection with the database can't be opened. The SqlConnection is constructed as follows:

        SqlConnection cn = new SqlConnection("Data Source=ADAM-PC;Initial Catalog=integrationdatabase;Integrated Security=True");

    I don't know how to change the data source so that the connection with the database can be established on another computer. How can I solve this problem?
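
    A minimal sketch of moving the server name out of the code and into App.config, so each machine can supply its own (the "IntegrationDb" name is hypothetical):

        using System.Configuration;
        using System.Data.SqlClient;

        // App.config on the target machine:
        //   <connectionStrings>
        //     <add name="IntegrationDb"
        //          connectionString="Data Source=SOME-SERVER;Initial Catalog=integrationdatabase;Integrated Security=True" />
        //   </connectionStrings>
        string cs = ConfigurationManager.ConnectionStrings["IntegrationDb"].ConnectionString;
        SqlConnection cn = new SqlConnection(cs);

    The other machine also needs a SQL Server instance it can actually reach - either its own copy of the database or network access to yours - since "Data Source=ADAM-PC" only resolves on your own network.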
