Search Results

Search found 10028 results on 402 pages for 'berkeley db'.

Page 172/402 | < Previous Page | 168 169 170 171 172 173 174 175 176 177 178 179  | Next Page >

  • Python: eliminating stack traces into library code?

    - by Mark Harrison
    When I get a runtime exception from the standard library, it's almost always a problem in my code and not in the library code. Is there a way to truncate the exception stack trace so that it doesn't show the guts of the library package? For example, I would like to get this: Traceback (most recent call last): File "./lmd3-mkhead.py", line 71, in <module> main() File "./lmd3-mkhead.py", line 66, in main create() File "./lmd3-mkhead.py", line 41, in create headver1[depotFile]=rev TypeError: Data values must be of type string or None. and not this: Traceback (most recent call last): File "./lmd3-mkhead.py", line 71, in <module> main() File "./lmd3-mkhead.py", line 66, in main create() File "./lmd3-mkhead.py", line 41, in create headver1[depotFile]=rev File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/__init__.py", line 276, in __setitem__ _DeadlockWrap(wrapF) # self.db[key] = value File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/dbutils.py", line 68, in DeadlockWrap return function(*_args, **_kwargs) File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/__init__.py", line 275, in wrapF self.db[key] = value TypeError: Data values must be of type string or None.
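    A possible direction (an untested sketch, not from the thread): install a custom sys.excepthook that drops frames whose source file lives under the library's install prefix before formatting the traceback. The prefix below is taken from the traceback above and is otherwise an assumption; this only affects uncaught exceptions that reach the top level.

        import sys
        import traceback

        # Assumed prefix of the library tree whose frames should be hidden.
        LIBRARY_PREFIX = "/usr/anim/modsquad/oses/fc11/lib/python2.6"

        def short_excepthook(exc_type, exc_value, tb):
            frames = traceback.extract_tb(tb)
            # keep only frames whose source file is outside the library tree
            kept = [f for f in frames if not f[0].startswith(LIBRARY_PREFIX)]
            sys.stderr.write("Traceback (most recent call last):\n")
            for line in traceback.format_list(kept):
                sys.stderr.write(line)
            for line in traceback.format_exception_only(exc_type, exc_value):
                sys.stderr.write(line)

        sys.excepthook = short_excepthook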

    Read the article

  • Cleaning up PHP Code

    - by Michael
    Hi, I've noticed I am a very sloppy coder and do things out of the ordinary. Can you take a look at my code and give me some tips on how to code more efficiently? What can I do to improve? session_start(); /*check if the token is correct*/ if ($_SESSION['token'] == $_GET['custom1']){ /*connect to db*/ mysql_connect('localhost','x','x') or die(mysql_error()); mysql_select_db('x'); /*get data*/ $orderid = mysql_real_escape_string($_GET['order_id']); $amount = mysql_real_escape_string($_GET['amount']); $product = mysql_real_escape_string($_GET['product1Name']); $cc = mysql_real_escape_string($_GET['Credit_Card_Number']); $length = strlen($cc); $last = 4; $start = $length - $last; $last4 = substr($cc, $start, $last); $ipaddress = mysql_real_escape_string($_GET['ipAddress']); $accountid = $_SESSION['user_id']; $credits = mysql_real_escape_string($_GET['custom3']); /*insert history into db*/ mysql_query("INSERT into billinghistory (orderid, price, description, credits, last4, orderip, accountid) VALUES ('$orderid', '$amount', '$product', '$credits', '$last4', '$ipaddress', '$accountid')"); /*add the credits to the users account*/ mysql_query("UPDATE accounts SET credits = credits + $credits WHERE user_id = '$accountid'"); /*redirect is successful*/ header("location: index.php?x=1"); }else{ /*something messed up*/ header("location: error.php"); }
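    One common direction for tightening code like this: validate and bail out first, then use PDO prepared statements instead of escaping by hand. The sketch below is untested; the DSN, credentials and redirect targets are assumptions, while the table and column names are carried over from the question.

        <?php
        session_start();

        if (!isset($_GET['custom1']) || $_SESSION['token'] !== $_GET['custom1']) {
            header('Location: error.php');
            exit;
        }

        // assumed connection details
        $db = new PDO('mysql:host=localhost;dbname=x', 'user', 'pass');
        $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $last4 = substr($_GET['Credit_Card_Number'], -4);

        $stmt = $db->prepare(
            'INSERT INTO billinghistory
               (orderid, price, description, credits, last4, orderip, accountid)
             VALUES (?, ?, ?, ?, ?, ?, ?)'
        );
        $stmt->execute(array(
            $_GET['order_id'], $_GET['amount'], $_GET['product1Name'],
            $_GET['custom3'], $last4, $_GET['ipAddress'], $_SESSION['user_id'],
        ));

        $stmt = $db->prepare('UPDATE accounts SET credits = credits + ? WHERE user_id = ?');
        $stmt->execute(array($_GET['custom3'], $_SESSION['user_id']));

        header('Location: index.php?x=1');
        exit;

    Prepared statements remove the need for mysql_real_escape_string and make the insert/update pair easier to read; wrapping the two statements in a transaction would be a natural next step.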

    Read the article

  • Wicket application + Apache + mod_jk - AJP queues are filling up!

    - by nojyarg
    Dear community, We are having a Wicket-based Java application deployed in a production server cluster using Apache (2.2.3) with mod_jk (1.2.30) as load balancing component w/ sticky session and Jboss 5 as application container for the Java application. We are inconsistently seeing an issue in our production environment where our AJP queues between Apache and Jboss as shown in the JMX console fill up with requests to the point where the application server is no longer taking on any new requests. When looking at all involved system components (overall traffic, load db, process list db, load of all clustered application server nodes) nothing points towards a capacity issue which would explain why the calls are being stalled in the AJP queue. Instead all systems appear sufficiently idle. So far, our only remedy to this issue is to restart the appservers and the load balancer which only occasionally clears the AJP queues. We are trying to figure out why the queues are filling up to the point that no calls get returned to the end user although the system is not under a high load. Has anyone else experienced similar problems? Are there any other system metrics we should monitor that could explain the queuing behavior? Is this potentially a mod_jk issue? If so, is it advisable to swap mod_jk with mod_cluster to resolve the issue? Any advice is highly appreciated. If I can provide additional information for the sake of troubleshooting I would be more than willing to do so. /Ben

    Read the article

  • get and set for class in model - MVC 2 asp.net

    - by bergin
    Hi there, I want to improve the program so it has a proper constructor but also works with the models environment of MVC. I currently have: public void recordDocument(int order_id, string filename, string physical_path, string slug, int bytes) { ArchiveDocument doc = new ArchiveDocument(); doc.order_id = order_id; doc.filename = filename; doc.physical_path = physical_path; doc.slug = slug; doc.bytes = bytes; db.ArchiveDocuments.InsertOnSubmit(doc); } This obviously should be a constructor and should change to the leaner: public void recordDocument(ArchiveDocument doc) { db.ArchiveDocuments.InsertOnSubmit(doc); } with a get & set somewhere else - not sure of the syntax - do I create a partial class? so: creating in the somewhere repository - ArchiveDocument doc = new ArchiveDocument(order_id, idTaggedFilename, physical_path, slug, bytes); and then: namespace ordering.Models { public partial class ArchiveDocument { int order_id, string filename, string physical_path, string slug, int bytes; public archiveDocument(int order_id, string filename, string physical_path, string slug, int bytes){ this.order_id = order_id; etc } } How should I alter the code?
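    One way to arrange this (a sketch, untested): keep the designer-generated LINQ to SQL entity as it is and add the parameterized constructor in a partial class, chaining to the generated parameterless constructor so the designer's initialization still runs. Names are taken from the question; anything not shown there is assumed.

        // entity and DataContext come from the LINQ to SQL designer
        namespace ordering.Models
        {
            public partial class ArchiveDocument
            {
                public ArchiveDocument(int order_id, string filename, string physical_path,
                                       string slug, int bytes)
                    : this()   // run the designer-generated parameterless constructor first
                {
                    this.order_id = order_id;
                    this.filename = filename;
                    this.physical_path = physical_path;
                    this.slug = slug;
                    this.bytes = bytes;
                }
            }
        }

        // The repository method then shrinks to the leaner form from the question:
        public void RecordDocument(ArchiveDocument doc)
        {
            db.ArchiveDocuments.InsertOnSubmit(doc);
        }

    Callers would then build the entity with new ArchiveDocument(order_id, idTaggedFilename, physical_path, slug, bytes) and pass it in.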

    Read the article

  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility which allows querying for data with hierarchical filtering. I have a few ideas how I'm going to go about it but was wondering if there are any recommendations or suggestions that might be more efficient. As an example imagine that a user is searching for a job. The job areas would be as follows. 1: Scotland 2: --- West Central 3: ------ Glasgow 4: ------ Etc 5: --- North East 6: ------ Ayrshire 7: ------ Etc A user can search specific (i.e. Glasgow) or in a larger area (i.e. Scotland). The two approaches I am considering are: keep a note of children in the database for each record (i.e. cat 1 would have 2, 3, 4 in its children field) and query against that record with a SELECT * FROM Jobs WHERE Category IN Areas.childrenField. Use a recursive function to find all results who have a relation to the selected area. The problems I see from both are: Holding this data in the db will mean having to keep track of all changes to structure. Recursion is slow and inefficent. Any ideas, suggestion or recommendations on the best approach? I'm using C# ASP.NET with MSSQL 2005 DB.
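    Since the target is MSSQL 2005, a third option worth weighing is a recursive common table expression: the hierarchy stays as a plain parent/child adjacency list, there is no denormalized children field to maintain, and the recursion happens inside one SQL statement. A sketch, where the Areas/Jobs table and column names are assumptions:

        -- assumes Areas(Id, ParentId, Name) and Jobs.Category referencing Areas.Id
        WITH AreaTree AS (
            SELECT Id
            FROM   Areas
            WHERE  Id = @SelectedAreaId          -- e.g. Scotland
            UNION ALL
            SELECT a.Id
            FROM   Areas a
            INNER JOIN AreaTree t ON a.ParentId = t.Id
        )
        SELECT j.*
        FROM   Jobs j
        WHERE  j.Category IN (SELECT Id FROM AreaTree);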

    Read the article

  • WPF & Linq To SQL binding ComboBox to foreign key

    - by ZeroDelta
    I'm having trouble binding a ComboBox to a foreign key in WPF using Linq To SQL. It works fine when displaying records, but if I change the selection on the ComboBox, that change does not seem to affect the property to which it is bound. My SQL Server Compact file has three tables: Players (PK is PlayerID), Events (PK is EventID), and Matches (PK is MatchID). Matches has FKs for the other two, so that a match is associated with a player and an event. My window for editing a match uses a ComboBox to select the Event, and the ItemsSource is set to the result of a LINQ query to pull all of the Events. And of course the user should be able to select the Event based on EventName, not EventID. Here's the XAML: <ComboBox x:Name="cboEvent" DisplayMemberPath="EventName" SelectedValuePath="EventID" SelectedValue="{Binding Path=EventID, UpdateSourceTrigger=PropertyChanged}" /> And some code-behind from the Loaded event handler: var evt = from ev in db.Events orderby ev.EventName select ev; cboEvent.ItemsSource = evt.ToList(); var mtch = from m in db.Matches where m.PlayerID == ((Player)playerView.CurrentItem).PlayerID select m; matchView = (CollectionView)CollectionViewSource.GetDefaultView(mtch); this.DataContext = matchView; When displaying matches, this works fine--I can navigate from one match to the next and the EventName is shown correctly. However, if I select a new Event via this ComboBox, the CurrentItem of the CollectionView doesn't seem to change. I feel like I'm missing something stupid! Note: the Player is selected via a ListBox, and that selection filters the matches displayed--this seems to be working fine, so I didn't include that code. That is the reason for the "PlayerID" reference in the LINQ query.

    Read the article

  • Django tutorial says I haven't set DATABASE_ENGINE setting yet... but I have

    - by Joe
    I'm working through the Django tutorial and receiving the following error when I run the initial python manage.py syncdb: Traceback (most recent call last): File "manage.py", line 11, in <module> execute_manager(settings) File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 362 in execute_manager utility.execute() File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 303, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv self.execute(*args, **options.__dict__) File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 222, in execute output = self.handle(*args, **options) File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 351, in handle return self.handle_noargs(**options) File "/Library/Python/2.6/site-packages/django/core/management/commands/syncdb.py", line 49, in handle_noargs cursor = connection.cursor() File "/Library/Python/2.6/site-packages/django/db/backends/dummy/base.py", line 15, in complain raise ImproperlyConfigured, "You haven't set the DATABASE_ENGINE setting yet." django.core.exceptions.ImproperlyConfigured: You haven't set the DATABASE_ENGINE setting yet. My settings.py looks like: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'dj_tut', # Or path to database file if using sqlite3. 'USER': '', # Not used with sqlite3. 'PASSWORD': '', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } I'm guessing this is something simple, but why isn't it seeing the ENGINE setting?
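    The dictionary-style DATABASES setting shown above was introduced in Django 1.2, and the dummy-backend complaint about DATABASE_ENGINE is what an older Django (the 1.1-era copy under /Library/Python/2.6/site-packages in the traceback) raises when the flat settings are missing. Two ways out: upgrade the installed Django to 1.2+, or keep the installed version and use the old flat settings. A sketch of the pre-1.2 form, with values assumed to mirror the dictionary above:

        # settings.py for Django < 1.2 -- flat settings instead of the DATABASES dict
        DATABASE_ENGINE = 'mysql'     # or 'postgresql_psycopg2', 'sqlite3', ...
        DATABASE_NAME = 'dj_tut'
        DATABASE_USER = ''
        DATABASE_PASSWORD = ''
        DATABASE_HOST = ''            # empty string means localhost
        DATABASE_PORT = ''            # empty string means the default port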

    Read the article

  • How can I debug PEAR auth?

    - by croceldon
    I have a directory on my site that I've implemented PEAR's Auth to run my authentication. It is working great. However, I've tried to make a copy of my site (it's going to be translated to a different language), and on this new site, the Auth process doesn't seem to be working correctly. I can login properly, but every time I try to go to a different page in the same directory, and use Auth to authorize, it forces me to login again. Here's my logic: $auth_options = array( 'dsn' => mysql://user:password@server/db', 'table' => 'users', 'usernamecol' => 'username', 'passwordcol' => 'password', 'db_fields' => '*' ); $auth = new Auth("DB", $auth_options, "login_function"); $auth->setFailedLoginCallback('bad_login'); $auth->start(); if (!$auth->checkAuth()) { die('cannot succeed in checkAuth') exit; } else { include("nocache.php"); } This is part of a file that's included in every php page I that I desire to require authentication. I can login properly once, but whenever I then try to go to a different page that requires authentication, it makes me login again (and I see the 'cannot succeed' die message at the bottom of the page). Again, this solution works fine on my original site, I copied all the files, and only changed the db server/password - it still doesn't work. And I'm using the same webhost for both. What am I doing wrong here? Or how can I debug this further?

    Read the article

  • WPF ObservableCollection in xaml

    - by Cloverness
    Hi, I have created an ObservableCollection in the code behind of a user control. It is created when the window loads: private void UserControl_Loaded(object sender, RoutedEventArgs e) { Entities db = new Entities(); ObservableCollection<Image> _imageCollection = new ObservableCollection<Image>(); IEnumerable<library> libraryQuery = from c in db.ElectricalLibraries select c; foreach (ElectricalLibrary c in libraryQuery) { Image finalImage = new Image(); finalImage.Width = 80; BitmapImage logo = new BitmapImage(); logo.BeginInit(); logo.UriSource = new Uri(c.url); logo.EndInit(); finalImage.Source = logo; _imageCollection.Add(finalImage); } } I need to get the ObservableCollection of images which are created based on the url saved in a database. But I need a ListView or other ItemsControl to bind to it in XAML file like this: But I can't figure out how to pass the ObservableCollection to the ItemsSource of that control. I tried to create a class and then create an instance of a class in the XAML file but it did not work. Should I create a static resource somehow? Any help will be greatly appreciated.
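    One possible arrangement (an untested sketch): expose the collection as a public property on the user control, point the control's DataContext at itself, and let the ItemsControl bind to the property by name. The XAML in the question appears to have been stripped, so the ListView below is an assumed stand-in, and the class name is made up.

        // using System.Collections.ObjectModel; using System.Windows; using System.Windows.Controls;
        public partial class ImageListControl : UserControl
        {
            public ObservableCollection<Image> ImageCollection { get; private set; }

            public ImageListControl()
            {
                InitializeComponent();
                ImageCollection = new ObservableCollection<Image>();
                DataContext = this;   // lets {Binding ImageCollection} resolve against this control
            }

            private void UserControl_Loaded(object sender, RoutedEventArgs e)
            {
                // build each Image as in the question, but add it to the property:
                // ImageCollection.Add(finalImage);
            }
        }

    with the binding in XAML along the lines of:

        <ListView ItemsSource="{Binding ImageCollection}" />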

    Read the article

  • NHibernate will not insert a record

    - by Brian Beckham
    I have an application that is now 4+ years old that is exhibiting some odd behavior on our latest deployment. The application uses nHibernate for all inserts / updates / selects, etc. We are currently using .NET 2.0, and nHibernate 1.2 (I know, we need to upgrade) This deployment is on Windows 2008 Server x64, IIS 7.5 - what I have seen so far is that the application runs, but is unable to insert or update records in the DB - reads seem fine so far, but writes are a problem. SOME writes actually work, inserts into some small tables, but most never even make it to the DB. Using SQL Profiler, the insert / updates never make it to the server, and turning log4net up to DEBUG, and show_sql true - the select statements appear, but the insert / update statements never make it into the log at all, and never show up at the server. What's even more odd is that the application seems to be oblivious to this - the commandandclose runs without exception (open session in view with an httpmodule), the domain objects come back with uuid's generated, etc. but never get persisted. Certainly an upgrade is due, but I would hate to try it during a deployment, and without time to accurately test the app. Any ideas?

    Read the article

  • codeigniter active record and mysql

    - by sea_1987
    I am running a query with Active Record in a modal of my codeigniter application, the query looks like this, public function selectAllJobs() { $this->db->select('*') ->from('job_listing') ->join('job_listing_has_employer_details', 'job_listing_has_employer_details.employer_details_id = job_listing.id', 'left'); //->join('employer_details', 'employer_details.users_id = job_listing_has_employer_details.employer_details_id'); $query = $this->db->get(); return $query->result_array(); } This returns an array that looks like this, [0]=> array(13) { ["id"]=> string(1) "1" ["job_titles_id"]=> string(1) "1" ["location"]=> string(12) "Huddersfield" ["location_postcode"]=> string(7) "HD3 4AG" ["basic_salary"]=> string(19) "£20,000 - £25,000" ["bonus"]=> string(12) "php, html, j" ["benefits"]=> string(11) "Compnay Car" ["key_skills"]=> string(1) "1" ["retrain_position"]=> string(3) "YES" ["summary"]=> string(73) "Lorem Ipsum is simply dummy text of the printing and typesetting industry" ["description"]=> string(73) "Lorem Ipsum is simply dummy text of the printing and typesetting industry" ["job_listing_id"]=> NULL ["employer_details_id"]=> NULL } } The job_listing_id and employer_details_id return as NULL however if I run the SQL in phpmyadmin I get full set of results, the query i running in phpmyadmin is, SELECT * FROM ( `job_listing` ) LEFT JOIN `job_listing_has_employer_details` ON `job_listing_has_employer_details`.`employer_details_id` LIMIT 0 , 30 Is there a reason why I am getting differing results?

    Read the article

  • Updating database row from model

    - by Jamie Dixon
    Hey everyone, I'm having a few problems updating a row in my database using Linq2Sql. Inside of my model I have two methods for updating and saving from my controller, which in turn receives an updated model from my view. My model methods look like: public void Update(Activity activity) { _db.Activities.InsertOnSubmit(activity); } public void Save() { _db.SubmitChanges(); } and the code in my Controller looks like: [HttpPost] public ActionResult Edit(Activity activity) { if (ModelState.IsValid) { UpdateModel<Activity>(activity); _activitiesModel.Update(activity); _activitiesModel.Save(); } return View(activity); } The problem I'm having is that this code inserts a new entry into the database, even though the model item I'm inserting-on-submit contains a primary key field. I've also tried re-attaching the model object back to the data source but this throws an error because the item already exists. Any pointers in the right direction will be greatly appreciated. UPDATE: I'm using dependency injection to instantiate my datacontext object as follows: IMyDataContext _db; public ActivitiesModel(IMyDataContext db) { _db = db; }
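    For an edit scenario, one sketch that avoids the stray insert: load the existing row inside the same DataContext, copy the posted values onto it, and let SubmitChanges issue the UPDATE. InsertOnSubmit always queues a new row, regardless of the primary key value. The property names below ('Id', 'Name', 'Description') are assumptions.

        // using System.Linq;
        public void Update(Activity activity)
        {
            // fetch the tracked instance for this key and copy the edits onto it
            var existing = _db.Activities.Single(a => a.Id == activity.Id);
            existing.Name = activity.Name;
            existing.Description = activity.Description;
        }

        public void Save()
        {
            _db.SubmitChanges();   // now produces an UPDATE rather than an INSERT
        }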

    Read the article

  • Having a problem inserting into database

    - by neo skosana
    I have a stored procedure: CREATE PROCEDURE busi_reg(IN instruc VARCHAR(10), IN tble VARCHAR(20), IN busName VARCHAR(50), IN busCateg VARCHAR(100), IN busType VARCHAR(50), IN phne VARCHAR(20), IN addrs VARCHAR(200), IN cty VARCHAR(50), IN prvnce VARCHAR(50), IN pstCde VARCHAR(10), IN nm VARCHAR(20), IN surname VARCHAR(20), IN eml VARCHAR(50), IN pswd VARCHAR(20), IN srce VARCHAR(50), IN refr VARCHAR(50), IN sess VARCHAR(50)) BEGIN INSERT INTO b_register SET business_name = busName, business_category = busCateg, business_type = busType, phone = phne, address = addrs, city = cty, province = prvnce, postal_code = pstCde, firstname = nm, lastname = surname, email = eml, password = pswd, source = srce, ref_no = refr; END; This is my php script: $busName = $_POST['bus_name']; $busCateg = $_POST['bus_cat']; $busType = $_POST['bus_type']; $phne = $_POST['phone']; $addrs = $_POST['address']; $cty = $_POST['city']; $prvnce = $_POST['province']; $pstCde = $_POST['postCode']; $nm = $_POST['name']; $surname = $_POST['lname']; $eml = $_POST['email']; $srce = $_POST['source']; $ref = $_POST['ref_no']; $result2 = $db->query("CALL busi_reg('$instruc', '$tble', '$busName', '$busCateg', '$busType', '$phne', '$addrs', '$cty', '$prvnce', '$pstCde', '$nm', '$surname', '$eml', '$pswd', '$srce', '$refr', '')"); if($result) { echo "Data has been saved"; } else { printf("Error: %s\n",$db->error); } Now the error that I get: Commands out of sync; you can't run this command now
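    With mysqli, a CALL to a stored procedure produces an extra result set that has to be drained before the connection will accept another query, and 'Commands out of sync' is the usual symptom when it is not. A sketch of the pattern, assuming $db is a mysqli instance (parameters elided):

        $result = $db->query("CALL busi_reg(/* ... parameters ... */)");

        if ($result === false) {
            printf("Error: %s\n", $db->error);
        } else {
            if ($result instanceof mysqli_result) {
                $result->free();
            }
            // drain any remaining result sets left behind by the CALL
            while ($db->more_results() && $db->next_result()) {
                if ($extra = $db->store_result()) {
                    $extra->free();
                }
            }
            echo "Data has been saved";
        }

    Separately, note that the snippet above assigns $result2 but then tests $result, so the success branch never reflects the actual outcome of the CALL.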

    Read the article

  • SQL problem - select across multiple tables (user groups)

    - by morpheous
    I have a db schema which looks something like this: create table user (id int, name varchar(32)); create table group (id int, name varchar(32)); create table group_member (foobar_id int, user_id int, flag int); I want to write a query that allows me to so the following: Given a valid user id (UID), fetch the ids of all users that are in the same group as the specified user id (UID) AND have group_member.flag=3. Rather than just have the SQL. I want to learn how to think like a Db programmer. As a coder, SQL is my weakest link (since I am far more comfortable with imperative languages than declarative ones) - but I want to change that. Anyway here are the steps I have identified as necessary to break down the task. I would be grateful if some SQL guru can demonstrate the simple SQL statements - i.e. atomic SQL statements, one for each of the identified subtasks below, and then finally, how I can combine those statements to make the ONE statement that implements the required functionality. Here goes (assume specified user_id [UID] = 1): //Subtask #1. Fetch list of all groups of which I am a member Select group.id from user inner join group_member where user.id=group_member.user_id and user.id=1 //Subtask #2 Fetch a list of all members who are members of the groups I am a member of (i.e. groups in subtask #1) Not sure about this ... select user.id from user, group_member gm1, group_member gm2, ... [Stuck] //Subtask #3 Get list of users that satisfy criteria group_member.flag=3 Select user.id from user inner join group_member where user.id=group_member.user_id and user.id=1 and group_member.flag=3 Once I have the SQL for subtask2, I'd then like to see how the complete SQL statement is built from these subtasks (you dont have to use the SQL in the subtask, it just a way of explaining the steps involved - also, my SQL may be incorrect/inefficient, if so, please feel free to correct it, and point out what was wrong with it). Thanks
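    For subtask #2, one sketch is to join group_member to itself: one alias picks out the specified user's memberships, the other picks out everyone else in those groups. The combined statement then just adds the flag filter (untested, following the schema above, where foobar_id is taken to be the group id):

        -- Subtask #2: users sharing at least one group with user 1
        SELECT DISTINCT gm2.user_id
        FROM   group_member gm1
        JOIN   group_member gm2 ON gm2.foobar_id = gm1.foobar_id
        WHERE  gm1.user_id = 1;

        -- Combined: same-group users whose own membership row has flag = 3
        SELECT DISTINCT u.id, u.name
        FROM   group_member gm1
        JOIN   group_member gm2 ON gm2.foobar_id = gm1.foobar_id
        JOIN   user u            ON u.id = gm2.user_id
        WHERE  gm1.user_id = 1
          AND  gm2.flag = 3
          AND  gm2.user_id <> 1;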

    Read the article

  • Faster Insertion of Records into a Table with SQLAlchemy

    - by Kyle Brandt
    I am parsing a log and inserting it into either MySQL or SQLite using SQLAlchemy and Python. Right now I open a connection to the DB, and as I loop over each line, I insert it after it is parsed (This is just one big table right now, not very experienced with SQL). I then close the connection when the loop is done. The summarized code is: log_table = schema.Table('log_table', metadata, schema.Column('id', types.Integer, primary_key=True), schema.Column('time', types.DateTime), schema.Column('ip', types.String(length=15)) .... engine = create_engine(...) metadata.bind = engine connection = engine.connect() .... for line in file_to_parse: m = line_regex.match(line) if m: fields = m.groupdict() pythonified = pythoninfy_log(fields) #Turn them into ints, datatimes, etc if use_sql: ins = log_table.insert(values=pythonified) connection.execute(ins) parsed += 1 My two questions are: Is there a way to speed up the inserts within this basic framework? Maybe have a Queue of inserts and some insertion threads, some sort of bulk inserts, etc? When I used MySQL, for about ~1.2 million records the insert time was 15 minutes. With SQLite, the insert time was a little over an hour. Does that time difference between the db engines seem about right, or does it mean I am doing something very wrong?
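    On the first question, one change that usually helps a great deal (an untested sketch) is to batch the parsed rows and hand each batch to a single execute call, which lets SQLAlchemy issue an executemany instead of one INSERT round trip per line. The batch size is an arbitrary assumption.

        batch = []
        BATCH_SIZE = 1000   # assumed; tune to taste

        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                batch.append(pythoninfy_log(m.groupdict()))
                if use_sql and len(batch) >= BATCH_SIZE:
                    # one execute() with a list of dicts -> executemany under the hood
                    connection.execute(log_table.insert(), batch)
                    parsed += len(batch)
                    batch = []

        if use_sql and batch:
            connection.execute(log_table.insert(), batch)
            parsed += len(batch)

    For SQLite in particular, wrapping the whole load in one transaction (trans = connection.begin() before the loop, trans.commit() after) avoids a disk sync per statement and is likely to account for most of the gap against MySQL.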

    Read the article

  • Inefficient 'ANY' LINQ clause

    - by Focus
    I have a query that pulls back a user's "feed" which is essentially all of their activity. If the user is logged in the query will be filtered so that the feed not only includes all of the specified user's data, but also any of their friends. The database structure includes an Actions table that holds the user that created the action and a UserFriends table which holds any pairing of friends using a FrienderId and FriendeeId column which map to UserIds. I have set up my LINQ query and it works fine to pull back the data I want, however, I noticed that the query gets turned into X number of CASE clauses in profiler where X is the number of total Actions in the database. This will obviously be horrible when the database has a user base larger than just me and 3 test users. Here's the SQL query I'm trying to achieve: select * from [Action] a where a.UserId = 'GUID' OR a.UserId in (SELECT FriendeeId from UserFriends uf where uf.FrienderId = 'GUID') OR a.UserId in (SELECT FrienderId from UserFriends uf where uf.FriendeeId = 'GUID') This is what I currently have as my LINQ query. feed = feed.Where(o => o.User.UserKey == user.UserKey || db.Users.Any(u => u.UserFriends.Any(ufr => ufr.Friender.UserKey == user.UserKey && ufr.isApproved) || db.Users.Any(u2 => u2.UserFriends.Any(ufr => ufr.Friendee.UserKey == user.UserKey && ufr.isApproved) ))); This query creates this: http://pastebin.com/UQhT90wh That shows up X times in the profile trace, once for each Action in the table. What am I doing wrong? Is there any way to clean this up?

    Read the article

  • DOM manipulation

    - by bluedaniel
    Hello everyone, I'm trying to use the DOM in PHP to do a pretty specific job and I've got no luck so far, the objective is to take a string of HTML from a Wordpress blog post (from the DB, this is a wordpress plugin). And then out of that HTML replace <div id="do_not_edit">old content</div>" with <div id="do_not_edit">new content</div>" in its place. Saving anything above and below that div in its structure. Then save the HTML back into the DB, should be simple really, I have read that a regex wouldn't be the right way to go here so I've turned to the DOM instead. The problem is I just can't get it to work, can't extract the div or anything. Help me!! UPDATE The HTML coming out of the wordpress table looks like: Congratulations on finding us here on the world wide web, we are on a mission to create a website that will show off your culinary skills better than any other website does. <div id="do_not_edit">blah blah</div> We want this website to be fun and easy to use, we strive for simple elegance and incredible functionality. We aim to provide a 'complete package'. By this we want to create a website where people can meet, share ideas and help each other out. After several different (incorrect) workings all I've got below is: $content = ($wpdb->get_var( "SELECT `post_content` FROM $wpdb->posts WHERE ID = {$article[post_id]}" )); $doc = new DOMDocument(); $doc->validateOnParse = true; $doc->loadHTMLFile($content); $element = $doc->getElementById('do_not_edit'); echo $element;
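    One thing that stands out in the snippet: loadHTMLFile() expects a filename, while $content is a string of HTML, so loadHTML() is the call wanted here. A sketch of the whole round trip (untested; DOMDocument wraps the fragment in html/body tags, which the serialisation step strips back out):

        $doc = new DOMDocument();
        libxml_use_internal_errors(true);   // silence warnings about the partial document
        $doc->loadHTML($content);           // loadHTML, not loadHTMLFile
        libxml_clear_errors();

        $xpath = new DOMXPath($doc);
        $div = $xpath->query('//div[@id="do_not_edit"]')->item(0);

        if ($div !== null) {
            // wipe the old contents and drop in the replacement
            while ($div->firstChild) {
                $div->removeChild($div->firstChild);
            }
            $div->appendChild($doc->createTextNode('new content'));
        }

        // re-serialise only what sits inside <body>, since loadHTML added the wrapper
        $body = $doc->getElementsByTagName('body')->item(0);
        $newContent = '';
        foreach ($body->childNodes as $child) {
            // saveXML($node) serialises one node; on PHP >= 5.3.6 saveHTML($node) also works
            $newContent .= $doc->saveXML($child);
        }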

    Read the article

  • Solr - DeltaImport doesn't run the parentDeltaQuery

    - by rails
    I have 1:n relation between my main entity(PackageVersion) and its tag in my DB. I add a new tag with this date to the db at the timestamp and I run delta import command. the select retrieves the line but i dont see any other sql. Here are my data-config.xml configurations: <entity name="PackageVersion" pk="PackageVersionId" query= "select ... from [dbo].[Package] Package inner join [dbo].[PackageVersion] PackageVersion on Package.Id = PackageVersion.PackageId" deltaQuery = "select PackageVersion.Id PackageVersionId from [dbo].[Package] Package inner join [dbo].[PackageVersion] PackageVersion on Package.Id = PackageVersion.PackageId where Package.LastModificationTime > '${dataimporter.last_index_time}' OR PackageVersion.Timestamp > '${dataimporter.last_index_time}'" deltaImportQuery="select ... from [dbo].[Package] Package inner join [dbo].[PackageVersion] PackageVersion on Package.Id = PackageVersion.PackageId Where PackageVersionId=='${dih.delta.id}'" > <entity name="PackageTag" pk="ResourceId" processor="CachedSqlEntityProcessor" cacheKey="ResourceId" cacheLookup="PackageVersion.PackageId" query= "SELECT ResourceId,[Text] PackageTag from [dbo].[Tag] Tag" deltaQuery="SELECT ResourceId,[Text] PackageTag from [dbo].[Tag] Tag Where Tag.TimeStamp > '${dataimporter.last_index_time}'" parentDeltaQuery="select PackageVersion.PackageVersionId from [dbo].[Package] where Package.Id=${PackageTag.ResourceId}"> </entity> </entity>

    Read the article

  • Creating android app Database with big amount of data

    - by Thomas
    Hi all, The database of my application needs to be filled with a lot of data, so during onCreate(), it's not only some create table sql instructions, there are a lot of inserts. The solution I chose is to store all these instructions in an sql file located in res/raw, which is loaded with Resources.openRawResource(id). It works well but I face an encoding issue, I have some accentuated characters in the sql file which appear incorrectly in my application. This is my code to do this: public String getFileContent(Resources resources, int rawId) throws IOException { InputStream is = resources.openRawResource(rawId); int size = is.available(); // Read the entire asset into a local byte buffer. byte[] buffer = new byte[size]; is.read(buffer); is.close(); // Convert the buffer into a string. return new String(buffer); } public void onCreate(SQLiteDatabase db) { try { // get file content String sqlCode = getFileContent(mCtx.getResources(), R.raw.db_create); // execute code for (String sqlStatements : sqlCode.split(";")) { db.execSQL(sqlStatements); } Log.v("Creating database done."); } catch (IOException e) { // Should never happen! Log.e("Error reading sql file " + e.getMessage(), e); throw new RuntimeException(e); } catch (SQLException e) { Log.e("Error executing sql code " + e.getMessage(), e); throw new RuntimeException(e); } The solution I found to avoid this is to load the sql instructions from a huge static final string instead of a file, and all accentuated characters appear correctly. But isn't there a more elegant way to load sql instructions than a big static final String attribute with all sql instructions? Thanks in advance Thomas
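    The usual culprit for mangled accented characters with this approach is the new String(buffer) call, which decodes the bytes with the platform default charset rather than the file's encoding, so reading from res/raw can stay. A sketch of the change, assuming res/raw/db_create is saved as UTF-8:

        // imports: java.io.ByteArrayOutputStream, java.io.IOException, java.io.InputStream,
        //          android.content.res.Resources
        public String getFileContent(Resources resources, int rawId) throws IOException {
            InputStream is = resources.openRawResource(rawId);
            try {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] chunk = new byte[4096];
                int read;
                while ((read = is.read(chunk)) != -1) {   // available() can under-report, so loop
                    out.write(chunk, 0, read);
                }
                // decode explicitly as UTF-8 instead of the platform default charset
                return new String(out.toByteArray(), "UTF-8");
            } finally {
                is.close();
            }
        }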

    Read the article

  • Password security; Is this safe?

    - by Camran
    I asked a question yesterday about password safety... I am new at security... I am using a mysql db, and need to store users passwords there. I have been told in answers that hashing and THEN saving the HASHED value of the password is the correct way of doing this. So basically I want to verify with you guys this is correct now. It is a classifieds website, and for each classified the user puts, he has to enter a password so that he/she can remove the classified using that password later on (when product is sold for example). In a file called "put_ad.php" I use the $_POST method to fetch the pass from a form. Then I hash it and put it into a mysql table. Then whenever the users wants to delete the ad, I check the entered password by hashing it and comparing the hashed value of the entered passw against the hashed value in the mysql db, right? BUT, what if I as an admin want to delete a classified, is there a method to "Unhash" the password easily? sha1 is used currently btw. some code is very much appreciated. Thanks
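    To confirm the model: hashing is one-way, so there is no practical way to "unhash"; the admin case is normally handled with a separate privilege check rather than by recovering the password. A sketch of a salted variant of the sha1 flow already in use (untested; the column names and the $isAdmin flag are assumptions):

        // When the ad is created: store a random per-ad salt next to the hash
        function make_password_hash($password) {
            $salt = sha1(uniqid(mt_rand(), true));
            return array($salt, sha1($salt . $password));   // save both columns
        }

        // When a delete is requested: recompute and compare, or let an admin through
        function password_matches($password, $salt, $storedHash) {
            return sha1($salt . $password) === $storedHash;
        }

        $mayDelete = $isAdmin
                  || password_matches($_POST['password'], $row['password_salt'], $row['password_hash']);

    The salt simply makes identical passwords hash to different stored values; verification is always done by re-hashing the submitted password, never by reversing the stored one.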

    Read the article

  • MD5CryptoServiceProvider ComputeHash Issues between VS 2003 and VS 2008

    - by owensoroke
    I have a database application that generates an MD5 hash and compares the hash value to a value in our DB (SQL 2K). The original application was written in Visual Studio 2003 and a deployed version has been working for years. Recently, some new machines on the .NET framework 3.5 have been having unrelated issues with our runtime. This has forced us to port our code path from Visual Studio 2003 to Visual Studio 2008. Since that time the hash produced by the code is different than the values in the database. The original call to the function posted in code is: RemoveInvalidPasswordCharactersFromHashedPassword(Text_Scrub(GenerateMD5Hash(strPSW))) I am looking for expert guidance as to whether or not the MD5 methods have changed since VS 2K3 (causing this point of failure), or where other possible problems may be originating from. I realize this may not be the best method to hash, but ultimately any changes to the MD5 code would force us to change some 300 values in our DB table and would cost us a lot of time. In addition, I am trying to avoid having to redeploy all of the functioning versions of this application. I am more than happy to post other code including the RemoveInvalidPasswordCharactersFromHashedPassword function, or our Text_Scrub if it is necessary to receive appropriate feedback. Thank you in advance for your input. Public Function GenerateMD5Hash(ByVal strInput As String) As String Dim md5Provider As MD5 ' generate bytes for the input string Dim inputData() As Byte = ASCIIEncoding.ASCII.GetBytes(strInput) ' compute MD5 hash md5Provider = New MD5CryptoServiceProvider Dim hashResult() As Byte = md5Provider.ComputeHash(inputData) Return ASCIIEncoding.ASCII.GetString(hashResult) End Function

    Read the article

  • java: how to parse html-like xml

    - by Yang
    I have an html-like xml, basically it is html. I need to get the elements in each . Each element looks like this: <line tid="744476117"> <attr>1414</attr> <attr>31</attr><attr class="thread_title">title1</attr><attr>author1</attr><attr>date1</attr></line> My code is as below, it does recognize that there are 50 in the file, but it gives me NULLPointException when parsing NodeList fstNmElmntLst = fstElmnt.getElementsByTagName("attr"); Any idea why this is happening? The same code has been used for other applications without problems. DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(cleanxml)); Document doc = db.parse(is); doc.getDocumentElement().normalize(); System.out.println("Root element " + doc.getDocumentElement().getNodeName()); NodeList nodeLst = doc.getElementsByTagName("line"); for (int s = 0; s < nodeLst.getLength(); s++) { System.out.println(nodeLst.getLength()); Node fstNode = nodeLst.item(s); if (fstNode.getNodeType() == Node.ELEMENT_NODE) { Element fstElmnt = (Element) fstNode; NodeList fstNmElmntLst = fstElmnt.getElementsByTagName("attr"); Element fstNmElmnt = (Element) fstNmElmntLst.item(0); NodeList fstNm = fstNmElmnt.getChildNodes(); System.out.println("attr : " + ((Node) fstNm.item(0)).getNodeValue()); } }

    Read the article

  • django multiprocess problem

    - by iKiR
    I have django application, running under lighttpd via fastcgi. FCGI running script looks like: python manage.py runfcgi socket=<path>/main.socket method=prefork \ pidfile=<path>/server.pid \ minspare=5 maxspare=10 maxchildren=10 maxrequests=500 \ I use SQLite. So I have 10 proccess, which all work with the same DB. Next I have 2 views: def view1(request) ... obj = MyModel.objects.get_or_create(id=1) obj.param1 = <some value> obj.save () def view2(request) ... obj = MyModel.objects.get_or_create(id=1) obj.param2 = <some value> obj.save () And If this views are executed in two different threads sometimes I get MyModel instance in DB with id=1 and updated either param1 or param2 (BUT not both) - it depends on which process was the first. (of course in real life id changes, but sometimes 2 processes execute these two views with same id) The question is: What should I do to get instance with updated param1 and param2? I need something for merging changes in different processes. One decision is create interprocess lock object but in this case I will get sequence executing views and they will not be able to be executed simultaneously, so I ask help
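    One way to avoid the lost update without an interprocess lock (a sketch): push each change into a single UPDATE statement via a queryset, so each view writes only the column it owns instead of re-saving the whole row it read earlier.

        # view1: only touches param1
        MyModel.objects.get_or_create(id=1)
        MyModel.objects.filter(id=1).update(param1=some_value)

        # view2: only touches param2
        MyModel.objects.get_or_create(id=1)
        MyModel.objects.filter(id=1).update(param2=some_value)

    update() issues UPDATE ... SET param1 = ... WHERE id = 1 directly, so the two processes no longer clobber each other's fields; the trade-off is that it bypasses save() and its signals, which matters if anything hooks them.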

    Read the article

  • UIImage Object surprisingly returning null but not NSData

    - by riyaz
    i have created a sqlite db. and i have insert a few datas in my db.. UIImage * imagee=[UIImage imageNamed:@"image.png"]; NSData *mydata=[NSData dataWithData:UIImagePNGRepresentation(imagee)]; const char *dbpath = [databasePath UTF8String]; NSString *insertSQL=[NSString stringWithFormat:@"insert into CONTACTS values(\"%@\",\"%@\")",@"Mathan",mydata]; NSLog(@"mydata %@",mydata); sqlite3_stmt *addStatement; const char *insert_stmt=[insertSQL UTF8String]; if (sqlite3_open(dbpath,&contactDB)==SQLITE_OK) { sqlite3_prepare_v2(contactDB,insert_stmt,-1,&addStatement,NULL); if (sqlite3_step(addStatement)==SQLITE_DONE) { sqlite3_bind_blob(addStatement,1, [mydata bytes], [mydata length], SQLITE_TRANSIENT); NSLog(@"Data saved"); } else{ NSLog(@"Some Error occured"); } sqlite3_close(contactDB); } else{ NSLog(@"Failure"); } have written some codes to retrive the data sqlite3_stmt *statement; if (sqlite3_open([databasePath UTF8String], &contactDB) == SQLITE_OK) { NSString *sql = [NSString stringWithFormat:@"SELECT * FROM contacts"]; if (sqlite3_prepare_v2( contactDB, [sql UTF8String], -1, &statement, nil) == SQLITE_OK) { while (sqlite3_step(statement) == SQLITE_ROW) { char *field1 = (char *) sqlite3_column_text(statement, 0); NSString *field1Str = [[NSString alloc] initWithUTF8String: field1]; NSLog(@"UserName %@",field1Str); NSData *data = [[NSData alloc] initWithBytes:sqlite3_column_blob(statement, 1) length:sqlite3_column_bytes(statement, 1)]; UIImage *newImage = [[UIImage alloc]initWithData:data]; NSLog(@"Image OBJ %@",newImage); NSLog(@"Image Data %@",data); } sqlite3_close(contactDB); } } sqlite3_finalize(statement); the problem is in log, inserted NSData object and retrieved NSData Objects are different (printing in log gives different stream) moreover Image OBJ is printed null in log.. Have seen similar questions in stackoverflow. But nothing seems to help. Please give some suggestions to overcome this issue.
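    Two details in the insert path would produce exactly this: the image bytes are interpolated into the SQL string with stringWithFormat (which stores the description of the NSData, not its bytes), and sqlite3_bind_blob is called after sqlite3_step has already executed the statement. A sketch of the usual pattern with ? placeholders (untested; the column names are assumptions):

        const char *insert_stmt = "INSERT INTO CONTACTS (name, image) VALUES (?, ?)";
        sqlite3_stmt *addStatement = NULL;

        if (sqlite3_open(dbpath, &contactDB) == SQLITE_OK) {
            if (sqlite3_prepare_v2(contactDB, insert_stmt, -1, &addStatement, NULL) == SQLITE_OK) {
                // bind BEFORE stepping
                sqlite3_bind_text(addStatement, 1, [@"Mathan" UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_blob(addStatement, 2, [mydata bytes], (int)[mydata length], SQLITE_TRANSIENT);

                if (sqlite3_step(addStatement) == SQLITE_DONE) {
                    NSLog(@"Data saved");
                } else {
                    NSLog(@"Insert failed: %s", sqlite3_errmsg(contactDB));
                }
                sqlite3_finalize(addStatement);
            }
            sqlite3_close(contactDB);
        }

    The SELECT side, which already reads the blob with sqlite3_column_blob/sqlite3_column_bytes, should then start returning data that UIImage can decode.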

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article
