Search Results

Search found 10177 results on 408 pages for 'thumbs db'.


  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem. How can I make a reusable flat file parser that can perform different data scrubbing operations per call, return the emitted results from each scrubbing operation to the caller, and perform bulk SQL insertions? Now, before anyone gets narky/concerned: I have written this code already, in a very unDRY fashion, which is why I am asking any Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops with custom processing in between, buffer each processed line to an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is being copy-pasted every time. Not very DRY! Here is a Ruby/pseudocode example of what I am repeating:

        lines_from_file.each do |line|
          line.match(/some regex/).each do |sub_str|
            # Process substring into useful format
            # EG1: Simple gsub() call
            # EG2: Custom function call to do complex scrubbing
            #      and matching, emitting results to array
            # EG3: Loop to match opening/closing/nested brackets
            #      or other delimiters and emit results to array
          end

          # Add processed lines to a buffer as SQL insert statement
          @buffer << PREPARED INSERT STATEMENT

          # Flush buffer when "buffer size limit reached" or "end of file"
          if sql_buffer_full || last_line_reached
            @dbc.insert(SQL INSERTS FROM BUFFER)
            @buffer = nil
          end
        end

    I am familiar with Proc/lambda functions. However, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some idea about how to solve this, but I would really like to see what the real Rubyists suggest. Over to you. Thanks in advance :D
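
    One hedged sketch of a DRY shape for this: a small importer object that owns the buffer/flush cycle and takes the two variable parts (the per-line scrubber and the row builder) as procs. All names here (BulkImporter, flush, buffer_limit, build_insert_sql) are illustrative assumptions, not from the original code:

        class BulkImporter
          def initialize(dbc, buffer_limit = 500, &row_builder)
            @dbc = dbc
            @buffer_limit = buffer_limit
            @row_builder = row_builder  # turns scrubbed values into one insert row
            @buffer = []
          end

          # scrubber: proc taking one raw line, returning an array of scrubbed
          # value sets, so the EG1/EG2/EG3 variants all share one interface.
          def import(lines, scrubber)
            lines.each do |line|
              scrubber.call(line).each { |values| @buffer << @row_builder.call(values) }
              flush if @buffer.size >= @buffer_limit
            end
            flush  # pick up the partial buffer at end of file
          end

          private

          def flush
            return if @buffer.empty?
            @dbc.insert(@buffer)  # one bulk insert per full buffer
            @buffer.clear
          end
        end

        # Each call site now supplies only its custom pieces:
        simple_scrub = lambda { |line| line.scan(/some regex/).map { |s| [s.strip] } }
        importer = BulkImporter.new(dbc) { |values| build_insert_sql(values) }
        importer.import(lines_from_file, simple_scrub)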

  • Retrieving and displaying a contact name from a contact id while using a SimpleCursorAdapter

    - by Henrique
    I am storing information on the phone's contacts in a SQLite database. The fields I am storing are _id, contact_id, and body, where _id is the row's id, contact_id is the phone contact's id, and body is just simple text, which is the actual information I am storing. I'm using a SimpleCursorAdapter to map the data to the view, like so:

        Cursor cursor = database.fetchInformation(); // fetch info from DB
        String[] from = new String[] { CONTACT_ID, BODY };
        int[] to = new int[] { R.id.contact, R.id.body };
        SimpleCursorAdapter adapter =
                new SimpleCursorAdapter(this, R.layout.row, cursor, from, to);
        getListView().setAdapter(adapter);

    What I actually want is to get the contact name associated with this CONTACT_ID and have that shown in the view. I can't seem to find a simple solution. I've tried implementing my own SimpleCursorAdapter and overriding setViewText(TextView, String), but it didn't work. It also seems overly complicated for such a simple task. Any help? Thanks.
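
    A hedged sketch of one common approach: keep the stock SimpleCursorAdapter but attach a ViewBinder that intercepts the contact_id column and resolves the display name through ContactsContract (API level 5+). Querying per row is shown for brevity; caching resolved names in a Map would be kinder to scroll performance:

        adapter.setViewBinder(new SimpleCursorAdapter.ViewBinder() {
            public boolean setViewValue(View view, Cursor cursor, int columnIndex) {
                if (view.getId() != R.id.contact) return false; // default binding
                long contactId = cursor.getLong(columnIndex);
                Uri uri = ContentUris.withAppendedId(
                        ContactsContract.Contacts.CONTENT_URI, contactId);
                Cursor c = getContentResolver().query(uri,
                        new String[] { ContactsContract.Contacts.DISPLAY_NAME },
                        null, null, null);
                String name = (c != null && c.moveToFirst()) ? c.getString(0) : "";
                if (c != null) c.close();
                ((TextView) view).setText(name);
                return true; // this column is handled
            }
        });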

  • Multiple jQuery UI buttons with same ID

    - by Mark Nolan
    There might be a really simple solution to this question, but I just don't see it! I have a page with a list of items. Every item has the same jQuery UI button (it's inside a dialog and adds that item to a list). I identify the item via the parent DIV holding the DB id. So far so good... The problem is that only the first button on the list works! The second, third, etc. buttons don't show any reaction at all. The buttons all have the same id, as the list is dynamic and the same action is triggered with every click; only the parent ID changes. Here's the display part:

        <div id="2">
          <div id="56">
            <button id="add-audio-file" class="ui-button ui-state-default ui-corner-all">betty_2.mp3</button>
          </div>
        </div>
        <div id="2">
          <div id="57">
            <button id="add-audio-file" class="ui-button ui-state-default ui-corner-all">betty_3.mp3</button>
          </div>
        </div>

    And here is the JS part:

        $('#add-audio-file').click(function() {
          assetID = $(this).parent('div').attr('id');
          pageID = $(this).parent('div').parent('div').attr('id');
          $.post(
            "modules/portfolio/serialize.php",
            {id : pageID, assetid : assetID, do : "add-audio-file"},
            function(data, textStatus, xhr) {
              $('#dialog-add-audio').dialog('close');
            }
          );
        });

    I am using jQuery 1.4.2 and jQuery UI 1.8rc3. Any ideas?
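
    A hedged sketch of the usual fix: element ids must be unique in a document, so $('#add-audio-file') binds only the first match (the duplicated <div id="2"> wrappers have the same problem). Marking the buttons with a class and delegating the handler (.delegate() is available in 1.4.2) also covers buttons injected after page load:

        <button class="add-audio-file ui-button ui-state-default ui-corner-all">betty_2.mp3</button>

        $(document).delegate('.add-audio-file', 'click', function () {
            var assetID = $(this).parent('div').attr('id');
            var pageID  = $(this).parent('div').parent('div').attr('id');
            $.post('modules/portfolio/serialize.php',
                   // 'do' is a reserved word in JS, so quote it as a key
                   { id: pageID, assetid: assetID, 'do': 'add-audio-file' },
                   function () { $('#dialog-add-audio').dialog('close'); });
        });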

  • Query broke down and left me stranded in the woods

    - by user1290323
    I am trying to execute a query that deletes all files from the images table that do not exist in the filters table. I skip the 3,500 newest files in the database, so as to "trim" the table back to 3,500 + X records, where X is the number of records in the filters table. The filters table holds markers for a file, as well as the file id used in the images table. The code will run on a cron job. My code:

        $sql = mysql_query("SELECT * FROM `images` ORDER BY `id` DESC") or die(mysql_error());
        while ($row = mysql_fetch_array($sql)) {
            $id = $row['id'];
            $file = $row['url'];
            $getId = mysql_query("SELECT `id` FROM `filter` WHERE `img_id` = '".$id."'")
                or die(mysql_error());
            if (mysql_num_rows($getId) == 0) {
                $IdQue[] = $id;
                $FileQue[] = $file;
            }
        }
        for ($i = 3500; $i < $x; $i++) {
            mysql_query("DELETE FROM `images` WHERE id='".$IdQue[$i]."' LIMIT 1")
                or die("line 18".mysql_error());
            unlink($FileQue[$i]) or die("file Not deleted");
        }
        echo ($i-3500)." files deleted.";

    Output: 0 files deleted.
    Database contents: images table 10,000 rows; filters table 63 rows; rows in the filters table containing an images-table id: 63.
    Execution time of the PHP script: 4 seconds +/- 0.5 seconds.
    Relevant DB structure:

        TABLE images: id, url, etc...
        TABLE filter: id, img_id (contains an id from the images table), etc...
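
    A hedged diagnosis: $x is never assigned, so the delete loop's condition $i < $x fails on the first check and ($i - 3500) prints 0. A sketch of a one-query rewrite (table and column names taken from the post; the LIMIT offset idiom is standard MySQL) that also drops the 10,000 per-row filter lookups:

        // Candidates: images with no filter row, skipping the 3,500 newest.
        $res = mysql_query(
            "SELECT i.`id`, i.`url`
               FROM `images` i
               LEFT JOIN `filter` f ON f.`img_id` = i.`id`
              WHERE f.`id` IS NULL
              ORDER BY i.`id` DESC
              LIMIT 3500, 18446744073709551615"  // offset 3500, no upper bound
        ) or die(mysql_error());

        $deleted = 0;
        while ($row = mysql_fetch_assoc($res)) {
            mysql_query("DELETE FROM `images` WHERE `id` = '".$row['id']."' LIMIT 1")
                or die(mysql_error());
            unlink($row['url']);
            $deleted++;
        }
        echo $deleted." files deleted.";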

  • How to scale a PHP application (servers, mysql, memcache)

    - by Stéphane Goetz
    Hi, I'm currently creating a website for a social project in Switzerland, and before there is an overflow of users I want to prepare the application to scale. I have answered many questions by myself, but some are left. Here is what I want to do.
    First: at the beginning, the application will have only one server (short term) with DNS, PHP, MySQL, data, and memcache.
    Second: then I will split them in two: DNS, MySQL, memcache / data, PHP.
    Third: here is the problem. I don't know exactly how to do this step while keeping the application running well. I could do:

        Front: load balancer, memcache, DNS
        Web 1: PHP, data
        Web 2: PHP, data
        MySQL

    This would be the scheme; all PHP sessions are kept in the DB. BUT, how do I sync the data? Do I run rsync to keep the nodes up to date? Do I put the data on a separate (network) disk to be sure? But in that case, how do I handle user uploads? And if the website gets more successful and we have to move to a bigger structure, wouldn't that create some latency on updates? Or would it be a good thing to go directly to Amazon's web services? Some info: I use CodeIgniter as the framework, and Linux as the web server (distribution not chosen yet, but it should be Debian). Thanks in advance for your answers.
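
    On the upload-sync question, a hedged sketch of the usual cron-driven rsync stopgap (host names and paths are assumptions). Routing all uploads to a single node keeps the sync one-way and safe; shared storage (NFS, or object storage such as Amazon S3) is the usual end state because it removes the replication lag entirely:

        # On web1, the node that accepts all uploads; run every minute via cron.
        # One-way push, so --delete cannot destroy files created on web2.
        rsync -az --delete /var/www/shared/uploads/ web2:/var/www/shared/uploads/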

  • Trying to replace contents of a Div, with no luck

    - by bluedaniel
    I've tried to use the DOM model with no bloody luck; getElementById just doesn't work for me. I'm loath to resort to a regex, but I'm not sure what else to do. The idea is to replace a

        <div id="content_div"> all sorts </div>

    with a new

        <div id="content_div"> NEW ALL SORTS HERE </div>

    and keep anything that was before or after it in the string. The string is a partial HTML string, more specifically out of the WordPress posts DB. Any ideas?
    UPDATE: I tagged this question PHP, but I probably should have mentioned that I'm looking for a PHP solution only.
    UPDATE: Code example:

        $content = $wpdb->get_var(
            "SELECT `post_content` FROM $wpdb->posts WHERE ID = {$article[post_id]}");
        $doc = new DOMDocument();
        $doc->validateOnParse = true;
        $doc->loadHTMLFile($content);
        $element = $doc->getElementById('div_to_edit');

    So I've tried a whole lot of code, and this is what I've got so far. It's probably not right, but I've been hacking at it for a little while now.
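
    A hedged sketch of the probable fix: $content is an HTML string, but loadHTMLFile() expects a filename, so nothing is ever parsed. loadHTML() takes the string directly (keep validateOnParse so getElementById can resolve the id attributes):

        $doc = new DOMDocument();
        $doc->validateOnParse = true;       // lets getElementById resolve id=""
        libxml_use_internal_errors(true);   // WordPress content is partial HTML
        $doc->loadHTML($content);           // string in, not a filename

        $div = $doc->getElementById('content_div');
        if ($div) {
            while ($div->firstChild) {      // empty the existing div
                $div->removeChild($div->firstChild);
            }
            $div->appendChild($doc->createTextNode('NEW ALL SORTS HERE'));
        }
        // Note: loadHTML() wraps fragments in <html><body>, so strip that
        // wrapper back off the saveHTML() output before storing.
        $html = $doc->saveHTML();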

  • Newsletter send using AJAX to avoid PHP timeout

    - by simPod
    I need to send newsletters. I already have a PHP script that sends mass emails, but it won't keep working as the email database grows, because of PHP's max script run time. So, to avoid that, I came up with a solution: I would call my PHP script using AJAX from JavaScript and give it a $_GET parameter with a count of 20, so the script would send only 20 emails. Then AJAX would receive the success response and call my script again and again until all emails are sent. Is this possible? I'm asking because I have never seen such a solution, so I'm wondering if it is realistic. (It's kind of hard to implement this in my PHP framework, so I'm asking the experts here first.) To sum it up, here's a code skeleton:

        <script>
        var emailCount = 1000; // would get this from DB
        var runCount = 20;     // number of emails sent in one cycle
        var from = 0;          // start number
        function sendMail(){
          if(from < emailCount){
            jQuery.ajaxfunction({
              path: 'script.php?from='+from+'&count='+runCount,
              successFc: function(){
                from += runCount;
                sendMail();
              }
            });
          }
        }
        sendMail();
        </script>

    So, are there any obstacles? Thanks a lot.
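
    The approach is workable; the pseudo jQuery.ajaxfunction maps onto a real call, as in this hedged sketch. Because each batch starts only after the previous one reports success, requests never stack up, though the run stops if the user closes the page (a server-side cron queue avoids that dependency):

        function sendMail() {
            if (from >= emailCount) return;      // all batches done
            jQuery.get('script.php', { from: from, count: runCount }, function () {
                from += runCount;
                sendMail();                      // kick off the next batch
            });
        }
        sendMail();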

  • Effective Method to Manage and Search Through 100,000+ Objects Instantly? (C#)

    - by Kirk
    I'm writing a media player for enthusiasts with large collections (over 100,000 tracks), and one of my main goals is search speed. I would like to allow the user to perform a Google-esque search of their entire music collection based on these factors: song path and file name; items in the ID3 tag (title, artist, album, etc.); lyrics. What is the best way for me to store this data and search through it? Currently I am storing each track in an object and iterating over an array of these objects, checking each of their variables for string matches based on the given search text. I've run into problems, though, where my search is not effective because it is always a phrase search, and I'm not sure how to make it fuzzier. Would an embedded DB like SQLite be faster than this? Any ideas on how I should structure this system? I also need playlist persistence, so that when they close and reopen the app, the same playlist loads immediately. How should I store the playlist information so it can load quickly when the application starts? Currently I am JSON-encoding the entire playlist, storing it in a text file, and reading it into the ListView at runtime, but it is getting sluggish over 20,000 tracks. Thanks!
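
    A hedged sketch of the SQLite route, assuming a System.Data.SQLite build with the FTS3 extension enabled (the schema and names are illustrative). A full-text virtual table indexes every token, so multi-word and prefix queries stop being linear scans over the whole collection:

        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText = @"CREATE VIRTUAL TABLE tracks
                                USING fts3(path, title, artist, album, lyrics)";
            cmd.ExecuteNonQuery();
        }

        // 'love yo*' matches whole tokens plus prefixes, Google-style.
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText = "SELECT path, title FROM tracks WHERE tracks MATCH @q";
            cmd.Parameters.AddWithValue("@q", "love yo*");
            using (var reader = cmd.ExecuteReader())
                while (reader.Read()) { /* reader.GetString(1) is the title */ }
        }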

  • How do I query SQLite Database in Android?

    - by kunjaan
    I successfully created the database and inserted a row; however, I cannot query it for some reason. My Droid crashes every time.

        String DATABASE_NAME = "myDatabase.db";
        String DATABASE_TABLE = "mainTable";
        String DATABASE_CREATE = "create table if not exists " + DATABASE_TABLE
                + " ( value VARCHAR not null);";

        SQLiteDatabase myDatabase = null;
        myDatabase = openOrCreateDatabase(DATABASE_NAME, Context.MODE_PRIVATE, null);
        myDatabase.execSQL(DATABASE_CREATE);

        // Create a new row of values to insert.
        ContentValues newValues = new ContentValues();
        // Assign values for each row.
        newValues.put("value", "kunjan");
        // Insert the row into your table
        myDatabase.insert(DATABASE_TABLE, null, newValues);

        String[] result_columns = new String[] { "value" };
        Cursor allRows = myDatabase.query(true, DATABASE_TABLE, result_columns,
                null, null, null, null, null, null);
        if (allRows.moveToFirst()) {
            String value = allRows.getString(1);
            TextView foo = (TextView) findViewById(R.id.TextView01);
            foo.setText(value);
        }
        allRows.close();
        myDatabase.close();
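
    A hedged read of the crash: the cursor contains exactly one column, and Cursor columns are 0-indexed, so getString(1) throws a CursorIndexOutOfBoundsException. A sketch of the fix:

        if (allRows.moveToFirst()) {
            // first (and only) column is index 0, not 1
            String value = allRows.getString(0);
            // or, more robust against column reordering:
            String same = allRows.getString(allRows.getColumnIndexOrThrow("value"));
            TextView foo = (TextView) findViewById(R.id.TextView01);
            foo.setText(value);
        }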

  • Setting Class-Level Variable to Use Between Event Handlers

    - by lush
    I'm having a hard time understanding why the following code doesn't work. I'm sure it's something remedial that I'm missing or not understanding. I currently have a page that asks for user input. If, based on the input and the logged-in user, I find that data from this page is already in the database, I need to update the existing records rather than creating new ones, so I set a class-level bool to true. The problem is that when MyNextButton is clicked, previouslySubmitted is still false, so I'm not sure how to make the value of this variable persist. Any advice is appreciated. Thanks.

        public partial class MyForm : System.Web.UI.Page
        {
            private bool previouslySubmitted;

            protected void Page_Load(object sender, EventArgs e)
            {
                MyButton.Click += (o, i) =>
                {
                    var q = from a in db.TableA
                            where (a.SomeField == SomeValue)
                            select a;
                    if (q.Any())
                    {
                        previouslySubmitted = true;
                        // populate the form's fields with values from the
                        // database for the user to revise
                    }
                };

                MyNextButton.Click += (o, i) =>
                {
                    if (previouslySubmitted)
                    {
                        // update database
                    }
                    else
                    {
                        // insert into database
                    }
                };
            }
        }
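
    The underlying issue: a fresh instance of the Page class is constructed on every postback, so instance fields reset between MyButton's click and MyNextButton's click. A hedged sketch of the standard fix, backing the flag with ViewState so it survives the round trips (Session works too, if the flag must outlive the page):

        private bool PreviouslySubmitted
        {
            get { return ViewState["PreviouslySubmitted"] as bool? ?? false; }
            set { ViewState["PreviouslySubmitted"] = value; }
        }
        // MyButton's handler sets PreviouslySubmitted = true; MyNextButton's
        // handler then reads the value restored from the page's ViewState.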

  • Structuring input of data for localStorage

    - by WmasterJ
    It is nice when there isn't a DB to maintain and users to authenticate. My professor has asked me to convert a recent research project of his that uses Bespin and calculates errors made by users in a code editor as part of his research. The goal is to convert from MySQL to using HTML5 localStorage completely. It doesn't seem so hard to do, even though digging through his code might take some time. Question: I need to store files and state (the last placement of the cursor and the active file). I have already done so by implementing the recommendations in another Stack Overflow thread, but I would like your input on how to structure the content. My current solution is a hashmap-like arrangement of JavaScript objects:

        files = {};
        // later, saving
        files[fileName] = data;

    and then storing in localStorage using some recommended helpers:

        localStorage.setObject(files);

    Currently I'm also considering using some type of numeric id, so that files can be renamed without the hassle of renaming the key in the hashmap. What is your opinion on the way it is solved, and would you do it any differently?
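
    A hedged sketch of the numeric-id layout under consideration (all names are illustrative). Keys are stable ids, names live inside the records, and the editor state points at an id rather than a name, so a rename touches exactly one field:

        var store = {
            nextId: 1,
            files: {},                       // id -> { name, data }
            state: { activeId: null, cursor: null }
        };

        function saveFile(name, data) {
            var id = store.nextId++;
            store.files[id] = { name: name, data: data };
            localStorage.setItem('editor', JSON.stringify(store));
            return id;
        }

        function renameFile(id, newName) {   // the id key never changes
            store.files[id].name = newName;
            localStorage.setItem('editor', JSON.stringify(store));
        }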

  • Passing arguments and values from HTML to jQuery (events)

    - by Jaroslav Moravec
    What is the practice for passing arguments from HTML to a jQuery event function? For example, getting the id of a row from the DB:

        <tr class="jq_killMe" id="thisItemId-id"> ... </tr>

    and jQuery:

        $(".jq_killMe").click(function () {
          var tmp = $(this).attr('id').split("-");
          var id = tmp[0];
          // ...
        });

    What's the best practice if I want to pass more than one argument? Is it better not to use jQuery? For example:

        <tr onclick="killMe('id')"> ... </tr>

    I didn't find the answer to my question; I would be glad even for links. Thanks.
    Edit (pre-solution): So you suggested these methods:
    1. Add custom attributes to the element (XHTML)
    2. Use the id attribute and parse it with a regex
    3. Use data-* attributes (HTML5)
    4. Use hidden child elements
    I like the first solution, but I would like to (and my employer requires me to) produce valid code. Here is a nice question with answers on that: http://stackoverflow.com/questions/994856/so-what-if-custom-html-attributes-arent-valid-xhtml
    The second is not as pretty as the first, but it is valid, so it is the compromise. The third is the solution for the future, but there are a lot of CMSes where we have to use XHTML or HTML4 (and HTML5 adoption is a long process).
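
    A hedged sketch of option 3, for when HTML5 is available. Note that .data() only auto-reads data-* attributes from jQuery 1.4.3 onward; on older versions, read them with .attr():

        <tr class="jq_killMe" data-id="42" data-kind="invoice"> ... </tr>

        $('.jq_killMe').click(function () {
            var id   = $(this).attr('data-id');   // or $(this).data('id') on 1.4.3+
            var kind = $(this).attr('data-kind');
            // ...
        });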

  • Error when loading YAML config files in Rails

    - by ZelluX
    I am configuring Rails with MongoDB and have found a strange problem when parsing the config/mongo.yml file. config/mongo.yml is generated by executing script/rails generate mongo_mapper:config, and it looks like the following:

        defaults: &defaults
          host: 127.0.0.1
          port: 27017

        development:
          <<: *defaults
          database: tc_web_development

        test:
          <<: *defaults
          database: tc_web_test

    From the config file we can see that the development and test objects should both have a database field. But when it is parsed and loaded in config/initializers/mongo.rb:

        config = YAML::load(File.read(Rails.root.join('config/mongo.yml')))
        puts config.inspect
        MongoMapper.setup(config, Rails.env)

    the strange thing comes: the output of puts config.inspect is

        {"defaults"=>{"host"=>"127.0.0.1", "port"=>27017},
         "development"=>{"host"=>"127.0.0.1", "port"=>27017},
         "test"=>{"host"=>"127.0.0.1", "port"=>27017}}

    which does not contain the database attribute. But when I execute the same statements in a plain Ruby console, instead of the Rails console, mongo.yml is parsed correctly:

        {"defaults"=>{"host"=>"127.0.0.1", "port"=>27017},
         "development"=>{"host"=>"127.0.0.1", "port"=>27017, "database"=>"tc_web_development"},
         "test"=>{"host"=>"127.0.0.1", "port"=>27017, "database"=>"tc_web_test"}}

    I am wondering what the cause of this problem may be. Any ideas? Thanks.
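
    A hedged workaround while the cause is unknown: if the YAML engine loaded inside Rails mishandles the <<: *defaults merge key, spelling the values out removes the dependency on merge-key support entirely:

        development:
          host: 127.0.0.1
          port: 27017
          database: tc_web_development

        test:
          host: 127.0.0.1
          port: 27017
          database: tc_web_test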

  • C# : Forcing a clean run in a long running SQL reader loop?

    - by Wardy
    I have a SQL data reader that reads two columns from a SQL DB table. Once it has done its bit, it starts again, selecting another two columns. I would pull the whole lot in one go, but that presents a whole other set of challenges. My problem is that the table contains a large amount of data (some 3 million rows or so), which makes working with the entire set a bit of a problem. I'm trying to validate the field values, so I'm pulling the ID column and then one of the other columns, and running each value in that column through a validation pipeline where the results are stored in another database. My problem is that when the reader hits the end of handling one column, I need to force it to immediately clean up every little block of RAM used, as this process consumes about 700 MB and it has about 200 columns to go through. Without a full garbage collect, I will definitely run out of RAM. Anyone got any ideas how I can do this? I'm using lots of small reusable objects; my thought was that I could just call GC.Collect() at the end of each read cycle and that would flush everything out, but unfortunately that isn't happening, for some reason.
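
    A hedged sketch of the usual shape of the fix (columnNames and Validate stand in for the post's own pipeline): GC.Collect() can only reclaim objects nothing references anymore, so the first step is scoping the reader, command, and per-pass buffers so they are unreachable when the pass ends. If a forced sweep between passes is still wanted, the collect/finalize/collect pattern below is the conventional form:

        for (int col = 0; col < columnNames.Length; col++)   // one pass per column
        {
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText = "SELECT Id, " + columnNames[col] + " FROM BigTable";
                using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
                {
                    while (reader.Read())
                        Validate(reader.GetInt32(0), reader.GetValue(1));
                }
            }   // reader and command are unreachable past this brace

            GC.Collect();                    // sweep what the pass released
            GC.WaitForPendingFinalizers();   // let finalizable objects finish
            GC.Collect();                    // reclaim the freshly finalized ones
        }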

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it (they love the LINQ support). So here's the problem: with peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID), it cannot be configured to use the default value (newsequentialid()) provided by the database; the application layer must generate the GUID and populate the key value. So here's the debate:
    1. Abandon Entity Framework and use another ORM:
       - use NHibernate and give up LINQ support
       - use linq2sql and give up future support (not to mention being bound to SQL Server as the DB)
    2. Abandon GUIDs and go with another PK strategy
    3. Devise a method to generate sequential GUIDs (COMBs?) at the application layer
    I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and option 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
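
    A hedged sketch of option 3: a COMB-style GUID (random bytes plus a timestamp tail) generated app-side. SQL Server orders uniqueidentifier values by the final six bytes first, so stamping the time there keeps inserts roughly sequential in the index; the exact byte layout below is one variant, not a standard:

        static Guid NewSequentialComb()
        {
            byte[] g = Guid.NewGuid().ToByteArray();
            byte[] t = BitConverter.GetBytes(DateTime.UtcNow.Ticks);
            if (BitConverter.IsLittleEndian)
                Array.Reverse(t);            // most significant byte first
            // bytes 10-15 are SQL Server's most significant sort positions
            Array.Copy(t, 2, g, 10, 6);
            return new Guid(g);
        }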

  • treeview dynamically populated

    - by Laziale
    Hello everyone. I have a TreeView control where I want to put files uploaded to the server, and I want to be able to create the nodes and the child nodes dynamically from the database. I am using this query to get the data from the DB:

        SELECT c.Category, d.DocumentName
        FROM Categories c
        INNER JOIN DocumentUserFile d ON c.ID = d.CategoryId
        WHERE d.UserId = '9rge333a-91b5-4521-b3e6-dfb49b45237c'

    The result of that query is:

        Agendas   transactions.pdf
        Minutes   accounts.pdf

    I want the TreeView sorted that way too. I am trying with this piece of code:

        TreeNode tn = new TreeNode();
        TreeNode tnSub = new TreeNode();
        foreach (DataRow dt in tblTreeView.Rows)
        {
            tn.Text = dt[0].ToString();
            tn.Value = dt[0].ToString();
            tnSub.Text = dt[1].ToString();
            tnSub.NavigateUrl = "../downloading.aspx?file=" + dt[1].ToString()
                + "&user=" + userID;
            tn.ChildNodes.Add(tnSub);
            tvDocuments.Nodes.Add(tn);
        }

    I get the TreeView populated nicely for the first category and the document under that category, but I can't get it to work when I want to show more documents under that category, or, even more complicated, a new category beneath the first one with documents from that category. How can I solve this? I appreciate the answers a lot. Thanks, Laziale
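
    A hedged sketch of the usual fix: tn and tnSub are constructed once, so every loop iteration mutates the same two nodes instead of adding new ones. Creating a child node per row, and reusing a category node only when the category repeats, gives the grouping described:

        var categoryNodes = new Dictionary<string, TreeNode>();
        foreach (DataRow row in tblTreeView.Rows)
        {
            string category = row[0].ToString();
            TreeNode parent;
            if (!categoryNodes.TryGetValue(category, out parent))
            {
                parent = new TreeNode(category, category);  // new node per category
                categoryNodes[category] = parent;
                tvDocuments.Nodes.Add(parent);
            }
            TreeNode child = new TreeNode(row[1].ToString());
            child.NavigateUrl = "../downloading.aspx?file=" + row[1] + "&user=" + userID;
            parent.ChildNodes.Add(child);                   // new node per document
        }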

  • How do I do MongoDB console-style queries in PHP?

    - by Zoe Boles
    I'm trying to get a MongoDB query from the JavaScript console into my PHP app. What I'm trying to avoid is having to translate the query into the PHP native driver's format: I don't want to hand-build arrays and hand-chain functions any more than I would want to manually build an array of MySQL's internal query structure just to get data. I already have a string producing the exact content I want in the Mongo console:

        db.intake.find({"processed": {"$exists": "false"}}).sort({"insert_date": "1"}).limit(10);

    The question is: is there a way for me to hand this string, as is, to MongoDB and have it return a cursor with the dataset I requested? Right now I'm at the "write your own parser, because it's not valid JSON, to kind of turn a subset of valid Mongo queries into the format the PHP native driver wants" stage, which isn't very fun. I don't want an ORM or a massive wrapper library; I just want to give a function my query string as it exists in the console and get an iterator back that I can work with. I know there are a couple of PHP-based Mongo manager applications that apparently take console-style queries and handle them, but on an initial browse through their code I'm not sure how they handle the translation. I absolutely love working with Mongo in the console, but I'm rapidly starting to loathe the thought of converting every query into the format the native driver wants.
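
    For reference, a hedged sketch of what the console line becomes in the native driver. The structure is a mechanical translation (JS object literal to PHP array), with one catch: $exists and the sort direction want a real boolean and integer rather than the quoted strings the console tolerates:

        $cursor = $db->intake
            ->find(array('processed' => array('$exists' => false)))
            ->sort(array('insert_date' => 1))
            ->limit(10);

        foreach ($cursor as $doc) {
            // iterate exactly as in the console
        }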

  • Moving from SVN to HG : branching and backup

    - by rorycl
    My company runs SVN right now and we are very familiar with it. However, because we do a lot of concurrent development, merging can become very complicated. We've been playing with hg and we really like the ability to make fast and effective clones on a per-feature basis. We've got two main issues we'd like to resolve before we move to hg.
    Branches for erstwhile SVN users: I'm familiar with the "4 ways to branch in Mercurial" as set out in Steve Losh's article. We think we should "materialise" the branches, because the dev team will find this the most straightforward way of migrating from SVN. Consequently, I guess we should follow the "branching with clones" model, which means that separate clones for branches are made on the server. While this means that every clone/branch needs to be made on the server and published separately, this isn't too much of an issue for us, as we are used to checking out SVN branches that come down as separate copies. I'm worried, however, that merging changes and following history may become difficult between branches in this model.
    Backup: if programmers in our team make local clones of a branch, how do they back up the local clone? We're used to seeing SVN commit messages like "Interim commit: db function not yet working" on a feature branch. I can't see a way of doing this easily in hg.
    Advice gratefully received. Rory
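
    On the backup question, a hedged sketch (paths and names are assumptions): every local clone is a full repository, so backing one up is just pushing to a private scratch repository on the server. Interim commits then survive a dead workstation without ever touching the shared branch:

        # once, on the server
        hg init /srv/hg/scratch/rory/feature-x

        # in the local clone, as often as wanted
        hg commit -m "Interim commit: db function not yet working"
        hg push ssh://server//srv/hg/scratch/rory/feature-x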

  • Ruby on Rails Mongrel web server stuck when MySQL service is running

    - by Marcos Buarque
    Hi, I am a Ruby on Rails newbie and already have a problem. I have started the Mongrel web server, and it works fine when the MySQL service isn't running. But when MySQL is on, Mongrel gets stuck and stops serving pages. So far, I have tested the localhost:3000 URL. When MySQL is off, it serves the page; when I click "About your application's environment", I get the message (of course) "Can't connect to MySQL server on 'localhost' (10061)". After starting the MySQL service and refreshing, I get no more answers and Mongrel does not serve the web page. It hangs with no answer to the browser, and I have to stop the web server and restart it. I have installed the mysql2 gem with the command gem install mysql2. I was able to create the _test and _development databases with rake db:create. I have tested with the MySQL root user and a blank password, and also with a superuser I created. No success. Here is the server log:

        Started GET "/rails/info/properties" for 127.0.0.1 at Fri Dec 24 17:41:25 -0200 2010

        Mysql2::Error (Can't connect to MySQL server on 'localhost' (10061)):

        Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (1.0ms)
        Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (5.0ms)
        Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/diagnostics.erb within rescues/layout (35.0ms)

    I am running in a Windows 7 environment with the firewall down.

  • Odd 'UNION' behavior in an Oracle SQL query

    - by RenderIn
    Here's my query:

        SELECT my_view.*
        FROM my_view
        WHERE my_view.trial IN (SELECT 2 AS trial_id FROM dual
                                UNION SELECT 3 FROM dual
                                UNION SELECT 4 FROM dual)
          AND my_view.location LIKE ('123-%')

    When I execute this query, it returns results that do not conform to the my_view.location LIKE ('123-%') condition. It's as if that condition is being ignored completely. I can even change it to my_view.location IS NULL, and it returns the same results, despite that field being not nullable. I know this query seems ridiculous with the selects from dual, but I've structured it this way to replicate a problem I have when I use a WITH clause (the results of that query are what the selects-from-dual inline view stands in for). I can modify the query like so, and it returns the expected results:

        SELECT my_view.*
        FROM my_view
        WHERE my_view.trial IN (2, 3, 4)
          AND my_view.location LIKE ('123-%')

    Unfortunately, I do not know the trial values up front (they are queried for in a WITH clause), so I cannot structure my query this way. What am I doing wrong? I will say that the my_view view is composed of 3 other views whose results are combined with UNION ALL, each of which retrieves some data over a DB link. Not that I believe that should matter, but in case it does.
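
    A hedged diagnostic sketch, not a definitive fix: given the UNION ALL view over DB links, this looks like the optimizer mishandling predicate pushing into the remote branches. Wrapping the LIKE in an inline view that is forbidden from merging forces that filter to be applied locally; if this version returns the right rows, the original query is likely tripping an optimizer defect worth raising with Oracle support:

        SELECT /*+ NO_MERGE(v) */ v.*
        FROM (SELECT * FROM my_view WHERE location LIKE '123-%') v
        WHERE v.trial IN (SELECT 2 FROM dual
                          UNION SELECT 3 FROM dual
                          UNION SELECT 4 FROM dual)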

  • Django: Filtering datetime field by *only* the year value?

    - by unclaimedbaggage
    Hi folks, I'm trying to spit out a Django page which lists all entries grouped by the year they were created, like so:

        2010: Note 4, Note 5, Note 6
        2009: Note 1, Note 2, Note 3

    It's proving more difficult than I would have expected. The model from which the data comes:

        class Note(models.Model):
            business = models.ForeignKey(Business)
            note = models.TextField()
            created = models.DateTimeField(auto_now_add=True)
            updated = models.DateTimeField(auto_now=True)

            class Meta:
                db_table = 'client_note'

            @property
            def note_year(self):
                return self.created.strftime('%Y')

            def __unicode__(self):
                return '%s' % self.note

    I've tried a few different ways, but seem to run into hurdles down every path. I'm guessing an effective GROUP BY would do the trick (Postgres backend), but I can't seem to find Django functionality that supports it. I tried getting individual years from the database, but I struggled to find a way of filtering datetime fields by just the year value. Finally, I tried adding the note_year property, but because it's derived, I can't filter on it. Any suggestions for an elegant way to do this? I figure it should be pretty straightforward, but I'm having a heckuva time with it. Any ideas much appreciated.
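
    A hedged sketch of the built-in route: the __year field lookup filters a DateTimeField by just its year in SQL, and QuerySet.dates() pulls the distinct years, so no GROUP BY or derived property is needed:

        # distinct years, newest first
        years = Note.objects.dates('created', 'year', order='DESC')

        # list of (year, notes-in-that-year) pairs for the template
        by_year = [(d.year, Note.objects.filter(created__year=d.year))
                   for d in years]

        # template side: {% for year, notes in by_year %} ... {% endfor %}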

  • How to submit Nothing as a route value to ASP MVC

    - by Adam
    I have a route with several optional parameters. These are possible search terms in different fields. So, for example, if I have the fields key, itemtype, and text, then I have in global.asax:

        routes.MapRoute( _
            "Search", _
            "Admin.aspx/Search/{Key}/{ItemType}/{Text}", _
            New With {.controller = "Admin", .action = "Search", _
                      .Key = Nothing, .ItemType = Nothing, .Text = Nothing} _
        )

    My action takes optional parameters:

        Function Search(Optional ByVal Key As String = Nothing, _
                        Optional ByVal ItemType As Integer = 0, _
                        Optional ByVal Text As String = Nothing, _
                        Optional ByVal OtherText As String = Nothing)

    It then checks whether the Key and Text strings have a non-null (and non-empty) value and adds search terms to the DB request as needed. However, is it possible to send in a null value for, for example, Key, but still send in a value for Text? If so, what does the URL look like? (Admin.aspx/Search//0/Foo doesn't work :) ) I know I can handle this using a parameter array instead, but I wondered if this is possible using the sort of route described. I could of course define some other value as equivalent to null (for example, a space/%20), but is there any way to send a null value in the URL? I suspect not, but thought I'd see if anyone knew of a way. I'm using ASP MVC 2 for this project.
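
    A hedged sketch of the usual workaround: URL path segments cannot be empty, but the default model binder also fills action parameters from the query string, so later values can skip the path entirely and Key simply stays Nothing:

        ' GET Admin.aspx/Search?ItemType=0&Text=Foo  ->  Key = Nothing, Text = "Foo"
        Dim url As String = Url.Action("Search", "Admin", New With {.ItemType = 0, .Text = "Foo"})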

  • Cryptography: best practices for keys in memory?

    - by Johan
    Background: I have some data encrypted with AES (i.e. symmetric crypto) in a database. A server-side application, running on an (assumed) secure and isolated Linux box, uses this data. It reads the encrypted data from the DB and writes back encrypted data, only dealing with the unencrypted data in memory. So, in order to do this, the app is required to have the key stored in memory. The question is: are there any good best practices for securing the key in memory? A few ideas:
    - Keeping it in unswappable memory (for Linux: setting SHM_LOCK with shmctl(2)?)
    - Splitting the key over multiple memory locations
    - Encrypting the key. With what, and how do you keep the... key key... secure?
    - Loading the key from file each time it's required (slow, and if the evildoer can read our memory, he can probably read our files too)
    Some scenarios for why the key might leak: an evildoer getting hold of a mem dump/core dump; bad bounds checking in code leading to information leakage. The first idea seems like a good and pretty simple thing to do, but how about the rest? Other ideas? Any standard specifications/best practices? Thanks for any input!
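
    A hedged sketch of the first idea, using mlock(2), the per-region sibling of SHM_LOCK, so the pages holding the key can never be written to swap, plus a best-effort wipe on release (disabling core dumps via RLIMIT_CORE separately addresses the dump scenario):

        #include <string.h>
        #include <sys/mman.h>

        static unsigned char key[32];

        int load_key(void)
        {
            /* pin the key's pages: never swapped, so never on disk        */
            /* (needs CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK limit)   */
            if (mlock(key, sizeof key) != 0)
                return -1;
            /* ... read the key material into key[] ... */
            return 0;
        }

        void drop_key(void)
        {
            /* best-effort zeroing; a wipe the compiler cannot elide needs */
            /* memset_s/explicit_bzero or a volatile pointer loop          */
            memset(key, 0, sizeof key);
            munlock(key, sizeof key);
        }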

  • ASP.Net Gridview paging, pageindex always == 0.

    - by David Archer
    Hi all, having a slight problem with my ASP.NET 3.5 app. I'm trying to get the program to pick up which page number has been clicked. I'm using ASP.NET's built-in paging with AllowPaging="True". It's never the same without code, so here it is. ASP.NET:

        <asp:GridView ID="GridView1" runat="server" CellPadding="4"
            ForeColor="#333333" GridLines="Vertical" Width="960px"
            AllowSorting="True" EnableSortingAndPagingCallbacks="True"
            AllowPaging="True" PageSize="25">
          <RowStyle BackColor="#F7F6F3" ForeColor="#333333" />
          <FooterStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
          <PagerStyle BackColor="#284775" ForeColor="White" HorizontalAlign="Center" />
          <SelectedRowStyle BackColor="#E2DED6" Font-Bold="True" ForeColor="#333333" />
          <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
          <EditRowStyle BackColor="#999999" />
          <AlternatingRowStyle BackColor="White" ForeColor="#284775" />
        </asp:GridView>

    C#:

        var fillTable = from ft in db.IncidentDatas
                        where ft.pUserID == Convert.ToInt32(ClientList.SelectedValue.ToString())
                        select new
                        {
                            Reference = ft.pRef.ToString(),
                            Date = ft.pIncidentDateTime.Value.Date.ToShortDateString(),
                            Time = ft.pIncidentDateTime.Value.TimeOfDay,
                            Premesis = ft.pPremises.ToString(),
                            Latitude = ft.pLat.ToString(),
                            Longitude = ft.pLong.ToString()
                        };
        if (fillTable.Count() > 0)
        {
            GridView1.DataSource = fillTable;
            GridView1.DataBind();
            var IncidentDetails = fillTable.ToList();
            for (int i = 0; i < IncidentDetails.Count(); i++)
            {
                int pageno = GridView1.PageIndex;
                int pagenostart = pageno * 25;
                if (i >= pagenostart && i < (pagenostart + 25))
                {
                    // Processing
                }
            }
        }

    Any idea why GridView1.PageIndex is always 0? The thing is, the paging works correctly for the grid view; it will always go to the correct page, but PageIndex is always 0 when I try to read it. Help!
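
    A hedged sketch of one common cause: EnableSortingAndPagingCallbacks="True" pages the grid through client-side callbacks, so the page changes without the server code above ever seeing an updated index. Turning the callbacks off and handling PageIndexChanging keeps the index on the server:

        // markup: AllowPaging="True" EnableSortingAndPagingCallbacks="False"
        //         OnPageIndexChanging="GridView1_PageIndexChanging"
        protected void GridView1_PageIndexChanging(object sender, GridViewPageEventArgs e)
        {
            GridView1.PageIndex = e.NewPageIndex;  // now PageIndex reads correctly
            GridView1.DataSource = fillTable;      // rebind for the new page
            GridView1.DataBind();
        }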

  • Setting up relations/mappings for a SQLAlchemy many-to-many database

    - by Brent Ramerth
    I'm new to SQLAlchemy and relational databases, and I'm trying to set up a model for an annotated lexicon. I want to support an arbitrary number of key-value annotations for the words, which can be added or removed at runtime. Since there will be a lot of repetition in the names of the keys, I don't want to use this solution directly, although the code is similar. My design has word objects and property objects. The words and properties are stored in separate tables, with a property_values table that links the two. Here's the code:

        from sqlalchemy import Column, Integer, String, Table, create_engine
        from sqlalchemy import MetaData, ForeignKey
        from sqlalchemy.orm import relation, mapper, sessionmaker
        from sqlalchemy.ext.declarative import declarative_base

        engine = create_engine('sqlite:///test.db', echo=True)
        meta = MetaData(bind=engine)

        property_values = Table('property_values', meta,
            Column('word_id', Integer, ForeignKey('words.id')),
            Column('property_id', Integer, ForeignKey('properties.id')),
            Column('value', String(20))
        )

        words = Table('words', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20)),
            Column('freq', Integer)
        )

        properties = Table('properties', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20), nullable=False, unique=True)
        )

        meta.create_all()

        class Word(object):
            def __init__(self, name, freq=1):
                self.name = name
                self.freq = freq

        class Property(object):
            def __init__(self, name):
                self.name = name

        mapper(Property, properties)

    Now I'd like to be able to do the following:

        Session = sessionmaker(bind=engine)
        s = Session()
        word = Word('foo', 42)
        word['bar'] = 'yes'  # or word.bar = 'yes' ?
        s.add(word)
        s.commit()

    Ideally this should add 1|foo|42 to the words table, add 1|bar to the properties table, and add 1|1|yes to the property_values table. However, I don't have the right mappings and relations in place to make this happen. I get the sense, from reading the documentation at http://www.sqlalchemy.org/docs/05/mappers.html#association-pattern, that I want to use an association proxy or something of that sort here, but the syntax is unclear to me. I experimented with this:

        mapper(Word, words, properties={
            'properties': relation(Property, secondary=property_values)
        })

    but this mapper only fills in the foreign key values, and I need to fill in the other value as well. Any assistance would be greatly appreciated.
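
    A hedged sketch of the association-object pattern those docs point at (0.5-era API; the PropertyValue class and the composite primary_key passed to mapper() are assumptions, since the association table as declared has no key of its own):

        from sqlalchemy.ext.associationproxy import association_proxy
        from sqlalchemy.orm.collections import attribute_mapped_collection

        class PropertyValue(object):
            """One cell of the lexicon: (word, property) -> value."""
            def __init__(self, prop, value):
                self.property = prop
                self.value = value

        mapper(PropertyValue, property_values,
               primary_key=[property_values.c.word_id, property_values.c.property_id],
               properties={'property': relation(Property)})

        mapper(Word, words, properties={
            'property_values': relation(
                PropertyValue,
                collection_class=attribute_mapped_collection('property')),
        })

        # word.annotations behaves like a dict keyed by Property objects
        Word.annotations = association_proxy(
            'property_values', 'value',
            creator=lambda prop, value: PropertyValue(prop, value))

        # usage: fetch-or-create the Property row, then assign its value
        bar = s.query(Property).filter_by(name='bar').first() or Property('bar')
        word.annotations[bar] = 'yes'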
