Search Results

Search found 10602 results on 425 pages for 'media queries'.


  • sIFR displays only the first line of text in Opera when on transparent background

    - by Gary
    I have implemented sIFR for the first time, on a test page. The code I have is below. It works fine in IE7, Firefox, Safari and Chrome, but in Opera only the first line of sIFR-ed text appears when the page first loads and after refreshing the page. However, if I scroll the page, all the text appears! It seems to be related to transparency, because if I turn transparency off, it works fine. Can someone please help me make this work? Thanks, Gary

      <link rel="stylesheet" href="sIFR-print.css" type="text/css" media="print" />
      <link rel="stylesheet" href="all.css" type="text/css" media="all" />
      <script src="sifr.js" type="text/javascript"></script>
      <script src="sifr-config.js" type="text/javascript"></script>
      <script type="text/javascript">
      //<![CDATA[
      var sIFRfont = { src: 'fontname.swf' };
      sIFR.activate(sIFRfont);
      sIFR.replace(sIFRfont, {
          css: [ '.sIFR-root { line-height: 1em; font-size: 64px; color: #000000; background-color: blue; text-align: left; font-weight: normal; font-style: normal; text-decoration: none; visibility: hidden; }' ],
          fitExactly: true,
          forceClear: true,
          forceSingleLine: false,
          selector: 'div.flashtext',
          transparent: true
      });
      //]]>
      </script>

    Read the article

  • Storing data on SD Card in Android

    - by BBoom
    Using the guide at Android Developers (http://developer.android.com/guide/topics/data/data-storage.html) I've tried to store some data to the SD card. This is my code:

      // Path to write files to
      String path = Environment.getExternalStorageDirectory().getAbsolutePath()
          + "/Android/data/" + ctxt.getString(R.string.package_name) + "/files/";
      String fname = "mytest.txt";

      // Current state of the external media
      String extState = Environment.getExternalStorageState();

      // External media can be written onto
      if (extState.equals(Environment.MEDIA_MOUNTED)) {
          try {
              // Make sure the path exists
              boolean exists = (new File(path)).exists();
              if (!exists) {
                  new File(path).mkdirs();
              }
              // Open output stream
              FileOutputStream fOut = new FileOutputStream(path + fname);
              fOut.write("Test".getBytes());
              // Close output stream
              fOut.flush();
              fOut.close();
          } catch (IOException ioe) {
              ioe.printStackTrace();
          }
      }

    When I create the new FileOutputStream I get a FileNotFoundException. I have also noticed that mkdirs() does not seem to create the directory. Can anyone tell me what I'm doing wrong? I'm testing on an AVD with a 2GB SD card and "hw.sdCard: yes"; the File Explorer in Eclipse's DDMS tells me that the only directory on the SD card is "LOST.DIR".

    Read the article

  • Silverlight 3 - How to "refresh" a DataGrid content?

    - by Josimari Martarelli
    I have the following scenario:

      using System;
      using System.Windows;
      using System.Windows.Controls;
      using System.Windows.Documents;
      using System.Windows.Ink;
      using System.Windows.Input;
      using System.Windows.Media;
      using System.Windows.Media.Animation;
      using System.Windows.Shapes;
      using System.Collections.Generic;

      namespace refresh
      {
          public partial class MainPage : UserControl
          {
              List<Customer> c = new List<Customer>();

              public MainPage()
              {
                  // Required to initialize variables
                  InitializeComponent();
                  c.Add(new Customer{ _nome = "Josimari", _idade = "29"});
                  c.Add(new Customer{ _nome = "Wesley", _idade = "26"});
                  c.Add(new Customer{ _nome = "Renato", _idade = "31"});

                  this.dtGrid.ItemsSource = c;
              }

              private void Button_Click(object sender, System.Windows.RoutedEventArgs e)
              {
                  c.Add(new Customer{ _nome = "Maiara", _idade = "18"});
              }
          }

          public class Customer
          {
              public string _nome { get; set; }
              public string _idade { get; set; }
          }
      }

    Here, dtGrid is my DataGrid control. The question is: how do I get the UI updated after adding one more record to my list? I managed to solve it by setting the DataGrid's ItemsSource to "" and then setting it back to the list of Customer objects, like this:

      private void Button_Click(object sender, System.Windows.RoutedEventArgs e)
      {
          c.Add(new Customer{ _nome = "Maiara", _idade = "18"});
          this.dtGrid.ItemsSource = "";
          this.dtGrid.ItemsSource = c;
      }

    Is there a way to get the UI updated, or the DataGrid's ItemsSource refreshed automatically, after adding, altering or deleting an item from the list c? Thank you, Josimari Martarelli

    Read the article

  • git push error '[remote rejected] master -> master (branch is currently checked out)'

    - by hap497
    Hi, yesterday I posted a question about how to clone a git repository from one of my machines to another: http://stackoverflow.com/questions/2808177/how-can-i-git-clone-from-another-machine/2809612#2809612. I am able to successfully clone a git repository from my src (192.168.1.2) to my dest (192.168.1.1). But when I edit a file, do a 'git commit -a -m "test"' and then a 'git push', I get this error on my dest (192.168.1.1):

      git push
      [email protected]'s password:
      Counting objects: 21, done.
      Compressing objects: 100% (11/11), done.
      Writing objects: 100% (11/11), 1010 bytes, done.
      Total 11 (delta 9), reused 0 (delta 0)
      error: refusing to update checked out branch: refs/heads/master
      error: By default, updating the current branch in a non-bare repository
      error: is denied, because it will make the index and work tree inconsistent
      error: with what you pushed, and will require 'git reset --hard' to match
      error: the work tree to HEAD.
      error:
      error: You can set 'receive.denyCurrentBranch' configuration variable to
      error: 'ignore' or 'warn' in the remote repository to allow pushing into
      error: its current branch; however, this is not recommended unless you
      error: arranged to update its work tree to match what you pushed in some
      error: other way.
      error:
      error: To squelch this message and still keep the default behaviour, set
      error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
      To git+ssh://[email protected]/media/LINUXDATA/working
       ! [remote rejected] master -> master (branch is currently checked out)
      error: failed to push some refs to 'git+ssh://[email protected]/media/LINUXDATA/working'

    I have two versions of git; could that cause this problem? I have git 1.7 on 192.168.1.2 (src) but git 1.5 on 192.168.1.1 (dest). I'd appreciate it if someone could help me with this. Thank you.

    Read the article

  • SQL Server 2005 and 'General network error'

    - by Mariusz
    I know there is a lot of information on the Internet about solving this problem, but it hasn't helped me. My Delphi application uses dbExpress controls to access the database and execute SQL queries. Once every couple of days, however, it stops working because the database connection fails. This happens on several different computers with different versions of Windows. MSSQL Server 2005 (version 9.0.4035) is installed on each of them. The application executes queries every couple of seconds, mainly insert commands. Every couple of days I get a series of exceptions like the following one:

      [DBNETLIB][ConnectionOpen (PreLoginHandshake()).]General network error. Check your network documentation.

    And then the SQL server becomes inaccessible until I restart it manually. The information I found on the Internet says that I should install some service packs, change some registry entries, etc., but believe me, none of these helps and I don't know what else to do now. Could you please help me solve this problem? Any clues or ideas? I can give you some more information about the server or the application if necessary. Thank you very much in advance.

    Read the article

  • Detecting which MCUs to connect on an incoming conference

    - by Fábio Batista
    Hello, SO. I'm working with the OCS UCCAPI, developing a custom OCS client. I'm currently having a hard time detecting what "kind" of conference my client is being invited to. Using the Office Communicator client, I can start "IM conferences" (by inviting more than one person and selecting "start an IM conversation") or "video conferences" (by selecting more than one person and selecting "start a video call"). The Office Communicator client, on the invitees' end, correctly starts the appropriate session (just IM, just video, or IM+video). However, when receiving the conference invite in my custom client, there's no data about the kind of session I'm being invited to. I need this information in order to decide whether or not to connect to the AV MCU and capture/show video. I've already tried:

    - When handling _IUccSessionManagerEvents.OnIncomingSession, parsing the RemoteSessionDescription property on the UccIncomingInvitationEvent object: no luck, the only data about the conference modality is an element in the XML about IM being enabled or not (<im available="true"> or <im available="false">), but nothing about whether the session has video available.

    - When handling _IUccConferenceSessionEvents.OnEnter, checking the Media property on the UccConferenceSession: doesn't work, all media types are present (MESSAGE, AUDIO, VIDEO, DATA and TELEPHONY), regardless of the type of conference I'm being invited to.

    - Also when handling _IUccConferenceSessionEvents.OnEnter, checking the Entities collection on the UccConferenceView object, to see which MCUs are enabled for this conference: doesn't work either, all MCUs are listed as available (IM, AV, DATA and CONTROL), regardless of the type of conference I'm being invited to.

    I'm running out of ideas. Some references I'm using: http://msdn.microsoft.com/en-us/library/bb664307.aspx http://msdn.microsoft.com/en-us/library/dd170830.aspx Thanks a lot.

    Read the article

  • MySQL Unique hash insertion

    - by Jesse
    So, imagine a MySQL table with a few simple columns, an auto increment, and a hash (varchar, UNIQUE). Is it possible to give MySQL a query that will add a row and generate a unique hash without multiple queries? Currently, the only way I can think of to achieve this is with a while loop, which I worry would become more and more processor intensive the more entries there are in the db. Here's some pseudo-PHP, obviously untested, but it gets the general idea across:

      while (!query("INSERT INTO table (hash) VALUES (" . generate_hash() . ");")) {
          // found conflict, try again.
      }

    In the above example, the hash column would be UNIQUE, and so the query would fail. The problem is, say there are 500,000 entries in the db and I'm working off of a base36 hash generator with 4 characters. The likelihood of a conflict would be almost 1 in 3, and I definitely can't be running 160,000 queries. In fact, any more than 5 I would consider unacceptable. So, can I do this with pure SQL? I would need to generate a base62, 6-char string (like "j8Du7X", chars a-z, A-Z, and 0-9), and either update the last_insert_id with it, or even better, generate it during the insert. I can handle basic CRUD with MySQL, but even JOINs are a little outside of my MySQL comfort zone, so excuse my ignorance if this is cake. Any ideas? I'd prefer to use either pure MySQL or PHP & MySQL, but hell, if another language can get this done cleanly, I'd build a script and AJAX it too. Thanks!
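
    One collision-free alternative, assuming the goal is a short unique public identifier rather than an unpredictable token: let the auto-increment column do the uniqueness work and derive the base62 string from the insert id afterwards. A rough Python/MySQLdb sketch; the table and column names and connection details are made up:

      import MySQLdb

      ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

      def base62(n, min_len=6):
          # Encode a non-negative integer in base62, left-padded to min_len chars.
          s = ""
          while n:
              n, r = divmod(n, 62)
              s = ALPHABET[r] + s
          return s.rjust(min_len, ALPHABET[0])

      conn = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="mydb")  # placeholders
      cur = conn.cursor()
      cur.execute("INSERT INTO mytable (created) VALUES (NOW())")  # hypothetical table/columns
      row_id = cur.lastrowid
      cur.execute("UPDATE mytable SET hash = %s WHERE id = %s", (base62(row_id), row_id))
      conn.commit()

    Because the string is derived from the id it is sequential and guessable; if unpredictability matters, the retry-on-duplicate loop above (with a longer hash) remains the simpler route.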

    Read the article

  • "Executing SQL directly; no cursor" error when using SCOPE_IDENTITY

    - by Chris
    There wasn't much on Google about this error, so I'm asking here. I'm switching a PHP web application from using MySQL to SQL Server 2008 (using ODBC, not php_mssql). Running queries or anything else isn't a problem, but when I try to use SCOPE_IDENTITY (or any similar function), I get the error "Executing SQL directly; no cursor". I'm doing this immediately after an insert, so it should still be in scope. Running the same insert statement and then querying for the insert ID works fine in SQL Server Management Studio. Here's my code right now (everything else in the database wrapper class works fine for other queries, so I'll assume it isn't relevant):

      function insert_id(){
          $x = $this->query_first("SELECT SCOPE_IDENTITY('session_log') as insert_id");
          echo "($x)";
          return $x;
      }

    query_first is a function that returns the first field of the first result row of a query (basically the equivalent of execute_scalar() in .NET). The full error message:

      Warning: odbc_exec() [function.odbc-exec]: SQL error: [Microsoft][SQL Server Native Client 10.0][SQL Server]Executing SQL directly; no cursor., SQL state 01000 in SQLExecDirect in C:[...]\Database_MSSQL.php on line 110
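
    Not a drop-in fix for the PHP wrapper, but for what it's worth: SCOPE_IDENTITY() takes no arguments (IDENT_CURRENT('table') is the variant that accepts a table name), and its value is only visible inside the scope/batch that performed the insert. One commonly suggested approach is therefore to send the INSERT and the identity SELECT as a single batch. A sketch of that pattern in Python/pyodbc rather than PHP; the connection string and columns are placeholders:

      import pyodbc

      # Placeholder connection string; adjust driver, server, database, credentials.
      conn = pyodbc.connect("DRIVER={SQL Server Native Client 10.0};SERVER=localhost;"
                            "DATABASE=mydb;Trusted_Connection=yes")
      cur = conn.cursor()

      # Run the INSERT and the identity lookup in one batch so SCOPE_IDENTITY()
      # is still in scope; SET NOCOUNT ON keeps row-count messages from getting
      # in the way of the result set.
      cur.execute(
          "SET NOCOUNT ON; "
          "INSERT INTO session_log (started) VALUES (GETDATE()); "  # hypothetical columns
          "SELECT CAST(SCOPE_IDENTITY() AS int) AS insert_id;"
      )
      insert_id = cur.fetchone()[0]
      conn.commit()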

    Read the article

  • How to retrieve a pixel in a tiff image (loaded with JAI)?

    - by Ed Taylor
    I'm using a class (DisplayContainer) to hold a RenderedOp image that should be displayed to the user:

      RenderedOp image1 = JAI.create("tiff", params);
      DisplayContainer d = new DisplayContainer(image1);
      JScrollPane jsp = new JScrollPane(d);

      // Create a frame to contain the panel.
      Frame window = new Frame();
      window.add(jsp);
      window.pack();
      window.setVisible(true);

    The class DisplayContainer looks like this:

      import java.awt.event.MouseEvent;
      import java.awt.geom.AffineTransform;
      import javax.media.jai.RenderedOp;
      import com.sun.media.jai.widget.DisplayJAI;

      public class DisplayContainer extends DisplayJAI {

          private static final long serialVersionUID = 1L;
          private RenderedOp img;

          // Affine transform
          private final float ratio = 1f;
          private AffineTransform scaleForm = AffineTransform.getScaleInstance(ratio, ratio);

          public DisplayContainer(RenderedOp img) {
              super(img);
              this.img = img;
              addMouseListener(this);
          }

          public void mouseClicked(MouseEvent e) {
              System.out.println("Mouseclick at: (" + e.getX() + ", " + e.getY() + ")");
              // How to retrieve the RGB-value of the pixel where the click took place?
          }

          // OMISSIONS
      }

    What I would like to know is how the RGB value of the clicked pixel can be obtained.

    Read the article

  • Browser Based Streaming Video/Audio (not progressive download)

    - by Josh
    Hello, I am trying to understand conceptually the best way to deliver real streaming audio and video content. I would want it to be consumed with a web browser, utilizing the least amount of proprietary technology. I wouldn't be serving static files and using progressive download; this would be real audio streams being captured live. How does one broadcast a stream that will be reasonably in sync with the source? What kind of protocol is suitable?

    Edit: In research I've found that there are a few protocols: RTSP, HTTP streaming, RTMP, and RTP.

    HTTP streaming is somewhat unsuitable if you are streaming a live performance/communication of some kind because it relies on TCP (as it's HTTP-based) and you don't lose packets. In a low-bandwidth situation, the client can get significantly behind in playback. (ref)

    RTMP is a proprietary technology, requiring Flash Media Server. Crap on that. The reason I looked at Flash is because it is extremely flexible as far as user experience goes. SoundManager2 provides an excellent JavaScript interface for playing media with Flash. This is what I would look for in a client application.

    RTSP/RTP is what Microsoft switched to using, deprecating their MMS protocol. RTSP is the control protocol. It's similar to HTTP with a few distinct differences: the server can also talk to the client, and there are additional commands, like PAUSE. It's also a stateful protocol, which is maintained with a session id. RTP is the protocol for delivering the payload (encoded audio or video). There are a few open-source projects, one of them being supported by Apple here. It seems like this might do what I want it to, and it looks like quite a few players support it. It sounds like it would be suitable for a "live" broadcast from this page here.

    Thanks, Josh

    Read the article

  • SQL SERVER 2008 JOIN hints

    - by Nai
    Hi all, recently I was trying to optimise this query:

      UPDATE Analytics
      SET UserID = x.UserID
      FROM Analytics z
      INNER JOIN UserDetail x ON x.UserGUID = z.UserGUID

    The estimated execution plan showed 57% on the Table Update and 40% on a Hash Match (Aggregate). I did some snooping around and came across the topic of JOIN hints. So I added a LOOP hint to my inner join and WA-ZHAM! The new execution plan shows 38% on the Table Update and 58% on an Index Seek. So I was about to start applying LOOP hints to all my queries until prudence got the better of me. After some googling, I realised that JOIN hints are not very well covered in BOL. Therefore:

    1. Can someone please tell me why applying LOOP hints to all my queries is a bad idea? I read somewhere that a LOOP JOIN is the default JOIN method for the query optimiser, but couldn't verify the validity of the statement.
    2. When are JOIN hints used? When the sh*t hits the fan and ghost busters ain't in town?
    3. What's the difference between LOOP, HASH and MERGE hints? BOL states that MERGE seems to be the slowest, but what is the application of each hint?

    Thanks for your time and help, people! I'm running SQL Server 2008, BTW. The statistics mentioned above are ESTIMATED execution plans.
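
    For reference, the hinted form of the query above: a join hint goes between the join keywords, while OPTION (LOOP JOIN) is the query-level alternative that applies to every join in the statement. A minimal sketch, wrapped in Python/pyodbc only to keep the examples here in one language; the connection string is a placeholder and the tables are the ones from the question:

      import pyodbc

      # Placeholder connection string.
      conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;DATABASE=mydb;Trusted_Connection=yes")
      cur = conn.cursor()

      # Join-level hint: the LOOP keyword sits between INNER and JOIN.
      cur.execute("""
          UPDATE Analytics
          SET UserID = x.UserID
          FROM Analytics z
          INNER LOOP JOIN UserDetail x ON x.UserGUID = z.UserGUID
      """)

      # Query-level alternative: OPTION (LOOP JOIN) applies loop joins to every
      # join in the statement rather than to one specific join.
      # cur.execute("""
      #     UPDATE Analytics
      #     SET UserID = x.UserID
      #     FROM Analytics z
      #     INNER JOIN UserDetail x ON x.UserGUID = z.UserGUID
      #     OPTION (LOOP JOIN)
      # """)

      conn.commit()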

    Read the article

  • Call Multiple Stored Procedures with the Zend Framework

    - by Brian Fisher
    I'm using Zend Framework 1.7.2, MySQL and the MySQLi PDO adapter. I would like to call multiple stored procedures during a given action. I've found that on Windows there is a problem calling multiple stored procedures. If you try it, you get the following error message:

      SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute.

    I found that to work around this issue I could just close the connection to the database after each call to a stored procedure:

      if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') {
          // If on Windows, close the connection
          $db->closeConnection();
      }

    This has worked well for me; however, now I want to call multiple stored procedures wrapped in a transaction. Of course, closing the connection isn't an option in this situation, since it causes a rollback of the open transaction. Any ideas how to fix this problem and/or work around the issue?

    More info about the work around
    Bug report about the problem
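
    The 2014 error generally means a previous result set was never fully read; a CALL in MySQL typically produces an extra status result set that has to be consumed before the connection will accept another query. A language-swapped sketch of that drain-then-continue pattern in Python/MySQLdb (connection details and procedure names are placeholders, and this is the general idea rather than Zend-specific code):

      import MySQLdb

      conn = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="mydb")  # placeholders
      cur = conn.cursor()

      def call_proc(cursor, sql):
          # Run a stored procedure and drain *all* of its result sets, including
          # the trailing status set MySQL appends to a CALL, so the connection is
          # free for the next statement without being closed.
          cursor.execute(sql)
          results = cursor.fetchall() if cursor.description else []
          while cursor.nextset() is not None:
              if cursor.description:
                  cursor.fetchall()
          return results

      # MySQLdb leaves autocommit off, so both calls land in one transaction
      # that is ended by commit().
      first = call_proc(cur, "CALL first_procedure()")    # hypothetical procedures
      second = call_proc(cur, "CALL second_procedure()")
      conn.commit()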

    Read the article

  • Javascript error when integrating django-tinymce and django-filebrowser

    - by jwesonga
    I've set up django-filebrowser in my app without any bugs, and I already had django-tinymce set up to load the editor in the admin forms. I now want to use django-filebrowser with django-tinymce, but I keep getting a weird JavaScript error when I click on "Image URL" in the Image popup:

      r is undefined

    The error is in js/tiny_mce/tiny_mce.js. My settings.py file has the following configuration:

      TINYMCE_JS_URL = MEDIA_URL + 'js/tiny_mce/tiny_mce.js'
      TINYMCE_DEFAULT_CONFIG = {
          'mode': "textareas",
          'theme': "advanced",
          'language': "en",
          'skin': "o2k7",
          'dialog_type': "modal",
          'object_resizing': True,
          'cleanup_on_startup': True,
          'forced_root_block': "p",
          'remove_trailing_nbsp': True,
          'theme_advanced_toolbar_location': "top",
          'theme_advanced_toolbar_align': "left",
          'theme_advanced_statusbar_location': "none",
          'theme_advanced_buttons1': "formatselect,styleselect,bold,italic,underline,bullist,numlist,undo,redo,link,unlink,image,code,template,visualchars,fullscreen,pasteword,media,search,replace,charmap",
          'theme_advanced_buttons2': "",
          'theme_advanced_buttons3': "",
          'theme_advanced_path': False,
          'theme_advanced_blockformats': "p,h2,h3,h4,div,code,pre",
          'width': '700',
          'height': '300',
          'plugins': "advimage,advlink,fullscreen,visualchars,paste,media,template,searchreplace",
          'advimage_update_dimensions_onchange': True,
          'file_browser_callback': "CustomFileBrowser",
          'relative_urls': False,
          'valid_elements': ""
              + "-p,"
              + "a[href|target=_blank|class],"
              + "-strong/-b,"
              + "-em/-i,"
              + "-u,"
              + "-ol,"
              + "-ul,"
              + "-li,"
              + "br,"
              + "img[class|src|alt=|width|height],"
              + "-h2,-h3,-h4,"
              + "-pre,"
              + "-code,"
              + "-div",
          'extended_valid_elements': ""
              + "a[name|class|href|target|title|onclick],"
              + "img[class|src|border=0|alt|title|hspace|vspace|width|height|align|onmouseover|onmouseout|name],"
              + "br[clearfix],"
              + "-p[class<clearfix?summary?code],"
              + "h2[class<clearfix],h3[class<clearfix],h4[class<clearfix],"
              + "ul[class<clearfix],ol[class<clearfix],"
              + "div[class],"
      }
      TINYMCE_FILEBROWSER = False
      TINYMCE_COMPRESSOR = False

    I've tried switching back to older versions of the TinyMCE JavaScript but nothing seems to work. I would appreciate some help.

    Read the article

  • Retrieve 2 last posts for each category.

    - by Savageman
    Hello, let's say I have 2 tables: blog_posts and categories. Each blog post belongs to only ONE category, so there is basically a foreign key between the 2 tables here. I would like to retrieve the 2 last posts from each category; is it possible to achieve this in a single request? GROUP BY would group everything and leave me with only one row per category, but I want 2 of them. It would be easy to perform 1 + N queries (N = number of categories): first retrieve the categories, and then retrieve 2 posts from each category. I believe it would also be quite easy to perform M queries (M = number of posts I want from each category): the first query selects the first post for each category (with a group by), the second query retrieves the second post for each category, etc. I'm just wondering if someone has a better solution for this. I don't really mind doing 1+N queries for that, but for curiosity and general SQL knowledge, it would be appreciated! Thanks in advance to whoever can help me with this.
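
    One classic way to do this in a single query is a correlated subquery that keeps a post only when fewer than two posts in the same category are newer than it. A sketch, wrapped in Python/MySQLdb to match the other examples here; the id and category_id column names and the connection details are assumptions about the schema:

      import MySQLdb

      conn = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="blog")  # placeholders
      cur = conn.cursor()

      # Keep a post if fewer than 2 posts in its category are newer than it,
      # i.e. the 2 most recent posts per category.
      cur.execute("""
          SELECT p.*
          FROM blog_posts p
          WHERE (
              SELECT COUNT(*)
              FROM blog_posts newer
              WHERE newer.category_id = p.category_id
                AND newer.id > p.id
          ) < 2
          ORDER BY p.category_id, p.id DESC
      """)
      latest_two_per_category = cur.fetchall()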

    Read the article

  • Integrating twitpic OAuth for iPhone.

    - by asadqamber
    How can I integrate twitpic API with OAuth for posting an image from iPhone? Any help or tutorial? Currently I am doing:

      NSURL *twitpicURL = [NSURL URLWithString:@"http://api.twitpic.com/2/upload.format"];
      theRequest = [NSMutableURLRequest requestWithURL:twitpicURL];
      [theRequest setHTTPMethod:@"POST"];

      // Set the params
      NSString *message = theMessage;
      [theRequest addValue:@"http://api.twitter.com/" forHTTPHeaderField:@"OAuth realm"];
      [theRequest addValue:TWITPIC_API_KEY forHTTPHeaderField:@"oauth_consumer_key"];
      [theRequest addValue:@"HMAC-SHA1" forHTTPHeaderField:@"oauth_signature_method"];
      [theRequest addValue:USER_OAUTH_TOKEN forHTTPHeaderField:@"oauth_token"];
      [theRequest addValue:USER_OAUTH_SECRET forHTTPHeaderField:@"oauth_secret"];
      [theRequest addValue:@"1272325550" forHTTPHeaderField:@"oauth_timestamp"];
      [theRequest addValue:nil forHTTPHeaderField:@"oauth_nonce"];
      [theRequest addValue:@"1.0" forHTTPHeaderField:@"oauth_version"];
      [theRequest addValue:nil forHTTPHeaderField:@"oauth_signature"];

      NSMutableData *postBody = [NSMutableData data];
      [postBody appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"source\"\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
      [postBody appendData:[[NSString stringWithFormat:@"lighttable"] dataUsingEncoding:NSUTF8StringEncoding]];

      // Message
      [postBody appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"message\"\r\n\r\n%@", message] dataUsingEncoding:NSUTF8StringEncoding]];

      // Media
      [postBody appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"media\"; filename=\"%@\"\r\n", @"doc_twitpic_image.jpg"] dataUsingEncoding:NSUTF8StringEncoding]];
      [postBody appendData:[[NSString stringWithFormat:@"Content-Type: image/jpg\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];

      // data as JPEG
      [postBody appendData:[[NSString stringWithFormat:@"Content-Transfer-Encoding: binary\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
      [postBody appendData:[NSData dataWithData:image]];

      [theRequest setHTTPBody:postBody];
      [theRequest setValue:[NSString stringWithFormat:@"%d", [postBody length]] forHTTPHeaderField:@"Content-Length"];

      theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self];

    Read the article

  • Can't get MySQL source query to work using Python mysqldb module

    - by Chris
    I have the following lines of code:

      sql = "source C:\\My Dropbox\\workspace\\projects\\hosted_inv\\create_site_db.sql"
      cursor.execute (sql)

    When I execute my program, I get the following error:

      Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'source C:\My Dropbox\workspace\projects\hosted_inv\create_site_db.sql' at line 1

    Now I can copy and paste the following into mysql as a query:

      source C:\\My Dropbox\\workspace\\projects\\hosted_inv\\create_site_db.sql

    And it works perfectly. When I check the query log for the query executed by my script, it shows that my query was the following:

      source C:\\My Dropbox\\workspace\\projects\\hosted_inv\\create_site_db.sql

    However, when I manually paste it in and execute, the entire create_site_db.sql gets expanded in the query log and it shows all the SQL queries in that file. Am I missing something here about how MySQLdb does queries? Am I running into a limitation? My goal is to run a SQL script to create the schema structure, but I don't want to have to call mysql in a shell process to source the SQL file. Any thoughts? Thanks!
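
    For what it's worth, source is a command built into the mysql command-line client, not a SQL statement, so the server rejects it when it arrives through MySQLdb. A hedged sketch of the usual workaround: read the file in Python and execute each statement yourself (the naive split on ';' assumes the script has no stored-procedure bodies or semicolons inside string literals, and the connection details are placeholders):

      import MySQLdb

      conn = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="hosted_inv")  # placeholders
      cur = conn.cursor()

      with open(r"C:\My Dropbox\workspace\projects\hosted_inv\create_site_db.sql") as f:
          script = f.read()

      # Naive statement split: fine for plain CREATE TABLE scripts, not for
      # scripts containing DELIMITER blocks or ';' inside string literals.
      for statement in script.split(";"):
          statement = statement.strip()
          if statement:
              cur.execute(statement)

      conn.commit()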

    Read the article

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

    Read the article

  • ai: Determining what tests to run to get most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com. I have a system (see the about page on the site for details) where:

    - I need to output a ranked list, with confidences, of categories that match a particular feature vector
    - the binary feature vectors are a list of site IDs & whether this session detected a hit
    - feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit)
    - categories are a large, non-closed set (user IDs)
    - my total feature space is approximately 50 million items (URLs)
    - for any given test, I can only query approx. 0.2% of that space
    - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc)
    - getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries
    - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user id)

    I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (test based on prior test data) and exploration (test stuff that's not been tested enough to find out how it performs). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?

    Read the article

  • Reporting System architecture for better performance

    - by pauloya
    Hi, we have a product that runs SQL Server Express 2005 and uses mainly ASP.NET. The database has around 200 tables, with a few (4 or 5) that can grow from 300 to 5000 rows per day and keep a history of 5 years, so they can grow to have 10 million rows. We have built a reporting platform that allows customers to build reports based on templates, fields and filters. We have faced performance problems almost since the beginning; we try to keep report display under 10 seconds, but some of them go up to 25 seconds (especially for customers with a long history). We keep checking indexes and trying to improve the queries, but we get the feeling that there's only so much we can do. Of course, the fact that the queries are generated dynamically doesn't help with the optimization. We also added a few tables that keep redundant data, but then we have the added problem of maintaining this data up to date, and also SQL Express has a limit on the size of databases. We are now facing a point where we have to decide if we want to give up real-time reports, or maybe cut the history to get better performance. I would like to ask what the recommended approach for this kind of system is. Also, should we start looking for third-party tools/platforms? I know OLAP can be an option, but can we make it work on SQL Server Express, or at least with a license that is cheap enough to distribute to thousands of deployments? Thanks

    Read the article

  • RESTful API design question - how should one allow users to create new resource instances?

    - by Tamás
    I'm working in a research group where we intend to publish implementations of some of the algorithms we develop on the web via a RESTful API. Most of these algorithms work on small to medium size datasets, and in many cases, a user of our services might want to run multiple queries (with different parameters) on the same dataset, so for me it seems reasonable to allow users to upload their datasets in advance and refer to them in their queries later. In this sense, a dataset could be a resource in my API, and an algorithm could be another. My question is: how should I let the users upload their own datasets? I cannot simply let users upload their data to /dataset/dataset_id as letting the users invent their own dataset_ids might result in ID collision and users overwriting each other's datasets by accident. (I believe one of the most frequently used dataset ID would be test). I think an ideal way would be to have a dedicated URL (like /dataset/upload) where users can POST their datasets and the response would contain a unique ID under which the dataset was stored, but I'm not sure that it does not violate the basic principles of REST. What is the preferred way of dealing with such scenarios?
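
    For what it's worth, the pattern described is the standard one: the client POSTs the dataset to the collection resource, the server mints the identifier, and the response carries 201 Created plus a Location header pointing at the new resource, so ID collisions are impossible by construction. A minimal sketch using Flask; the framework, route names and in-memory store are illustrative assumptions, not part of the question:

      import uuid
      from flask import Flask, request, jsonify, url_for

      app = Flask(__name__)
      datasets = {}  # in-memory store, purely for illustration

      @app.route("/datasets", methods=["POST"])
      def create_dataset():
          # The server generates the identifier, so clients can never collide.
          dataset_id = uuid.uuid4().hex
          datasets[dataset_id] = request.get_data()
          response = jsonify({"id": dataset_id})
          response.status_code = 201
          response.headers["Location"] = url_for("get_dataset", dataset_id=dataset_id)
          return response

      @app.route("/datasets/<dataset_id>", methods=["GET"])
      def get_dataset(dataset_id):
          return datasets[dataset_id]

      if __name__ == "__main__":
          app.run()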

    Read the article

  • Python: How to execute a SQL file or command

    - by Mestika
    Hi, I have this Python script:

      import sys
      import getopt
      import timeit
      import random
      import os
      import re
      import ibm_db
      import time
      from string import maketrans

      runs = 5
      queries = 50

      file = open("results.txt", "a")

      for r in range(5):
          print "Run %s\n" % r
          os.system("python reads.py -r1 -pquery1.sql -q50 -sespec")

      file.write('END QUERY READ 01')
      file.close()

      os.system("python query_read_02.py")

    Everything here is working: it creates the results.txt file, it runs the os.system("python reads.py...") command, and that script does everything it's supposed to. The problem comes when it goes on to run the query_read_02.py file. That file should execute a SQL command or a SQL file on my database, so I can create an index and see what the performance impact is, but how do I do it? I create the connection to the database in the reads.py file, but it's hard to create the queries in there because it doesn't keep track of which file it has reached; it just executes commands from the parameters. I hope I've explained myself clearly enough; otherwise please let me know. I just want to execute a SQL command or file with each query_read_0x.py file. Sincerely, Mestika
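
    Not the original query_read_02.py, just a hedged sketch of what it could look like with the ibm_db driver the script already imports: open a .sql file and run each statement with ibm_db.exec_immediate. The DSN, credentials and file name are placeholders:

      import ibm_db

      # Placeholder DSN; substitute the real database, host and credentials.
      conn = ibm_db.connect("DATABASE=testdb;HOSTNAME=localhost;PORT=50000;"
                            "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret;", "", "")

      def run_sql_file(connection, path):
          # Execute every ';'-separated statement in the file, e.g. CREATE INDEX.
          with open(path) as f:
              for statement in f.read().split(";"):
                  statement = statement.strip()
                  if statement:
                      ibm_db.exec_immediate(connection, statement)

      run_sql_file(conn, "create_index_02.sql")  # hypothetical file name
      ibm_db.close(conn)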

    Read the article

  • NHibernate with nothing but stored procedures

    - by ChrisB2010
    I'd like to have NHibernate call a stored procedure when ISession.Get is called to fetch an entity by its key, instead of using dynamic SQL. We have been using NHibernate and allowing it to generate our SQL for queries and inserts/updates/deletes, but now may have to deploy our application to an environment that requires us to use stored procedures for all database access. We can use sql-insert, sql-update, and sql-delete in our .hbm.xml mapping files for inserts/updates/deletes. Our HQL and criteria queries will have to be replaced with stored procedure calls. However, I have not figured out how to force NHibernate to use a custom stored procedure to fetch an entity by its key. I still want to be able to call ISession.Get, as in:

      using (ISession session = MySessionFactory.OpenSession())
      {
          return session.Get<Customer>(customerId);
      }

    and also lazy load objects, but I want NHibernate to call my "GetCustomerById" stored procedure instead of generating the dynamic SQL. Can this be done? Perhaps NHibernate is no longer a fit given this new environment we must support.

    Read the article

  • how to store JSON into POJO using Jackson

    - by user2963680
    I am developing a module where I am using a REST service to get data. I am not sure how to store the JSON into a POJO using Jackson when the request also carries query params. Any help is really appreciated as I am new to this. I am trying to do server-side filtering in an ExtJS infinite grid, which sends the requests below to the REST service.

    When the page loads the first time, it sends:

      http://myhost/mycontext/rest/populateGrid?_dc=9999999999999&page=1&start=0&limit=500

    When you select a filter on name and place, it sends:

      http://myhost/mycontext/rest/populateGrid?_dc=9999999999999&filter=[{"type":"string","value":"Tom","field":"name"},{"type":"string","value":"London","field":"Location"}]&page=1&start=0&limit=500

    I am trying to save this in a POJO and then send it to the database to retrieve data. For this, on the REST side I have written something like this:

      @Provider
      @Path("/rest")
      public interface restAccessPoint {

          @GET
          @Path("/populateGrid")
          @Produces({MediaType.APPLICATION_JSON})
          public Response getallGridData(FilterJsonToJava filterparam,
                                         @QueryParam("page") String page,
                                         @QueryParam("start") String start,
                                         @QueryParam("limit") String limit);
      }

      public class FilterJsonToJava {
          @JsonProperty(value = "filter")
          private List<Filter> data;
          // .. getter and setter below
      }

      public class Filter {
          @JsonProperty("type")
          private String type;
          @JsonProperty("value")
          private String value;
          @JsonProperty("field")
          private String field;
          // ...getters and setters below
      }

    I am getting the below error:

      The following warnings have been detected with resource and/or provider classes:
      WARNING: A HTTP GET method, public abstract javax.ws.rs.core.Response com.xx.xx.xx.xxxxx
      (com.xx.xx.xx.xx.json.FilterJsonToJava, java.lang.String, java.lang.String, java.lang.String),
      should not consume any entity.
      com.xx.xx.xx.xx.json.FilterJsonToJava, and Java type class com.xx.xx.xx.FilterJsonToJava,
      and MIME media type application/octet-stream was not found
      [11/6/13 17:46:54:065] 0000001c ContainerRequ E The registered message body readers
      compatible with the MIME media type are:
      application/octet-stream
      com.sun.jersey.core.impl.provider.entity.ByteArrayProvider
      com.sun.jersey.core.impl.provider.entity.FileProvider
      com.sun.jersey.core.impl.provider.entity.InputStreamProvider
      com.sun.jersey.core.impl.provider.entity.DataSourceProvider
      com.sun.jersey.core.impl.provider.entity.RenderedImageProvider
      */* -> com.sun.jersey.core.impl.provider.entity.FormProvider

    Read the article

  • Convert Lambda from C# to VB.NET

    - by Iosu
    How would I translate this C# lambda expression into VB.NET?

      query.ExecuteAsync(op => op.Results.ForEach(Employees.Add));

      using System;
      using System.Net;
      using System.Windows;
      using System.Windows.Controls;
      using System.Windows.Documents;
      using System.Windows.Ink;
      using System.Windows.Input;
      using System.Windows.Media;
      using System.Windows.Media.Animation;
      using System.Windows.Shapes;
      using System.Collections.ObjectModel;
      using IdeaBlade.Core;
      using IdeaBlade.EntityModel;

      namespace SimpleSteps
      {
          public class MainPageViewModel
          {
              public MainPageViewModel()
              {
                  Employees = new ObservableCollection<Employee>();
                  var mgr = new NorthwindIBEntities();
                  var query = mgr.Employees;
                  query.ExecuteAsync(op => op.Results.ForEach(Employees.Add));
              }

              public ObservableCollection<Employee> Employees { get; private set; }
          }
      }

    Read the article

  • AS3: Array of objects parsed from XML remains with zero length

    - by Joel Alejandro
    I'm trying to parse an XML file and create an object array from its data. The data is converted to MediaAlbum classes (class of my own). The XML is parsed correctly and the object is loaded, but when I instantiate it, the Albums array is of zero length, even when the data is loaded correctly. I don't really know what the problem could be... any hints? import Nem.Media.*; var mg:MediaGallery = new MediaGallery("test.xml"); trace(mg.Albums.length); MediaGallery class: package Nem.Media { import flash.net.URLRequest; import flash.net.URLLoader; import flash.events.*; public class MediaGallery { private var jsonGalleryData:Object; public var Albums:Array = new Array(); public function MediaGallery(xmlSource:String) { var loader:URLLoader = new URLLoader(); loader.load(new URLRequest(xmlSource)); loader.addEventListener(Event.COMPLETE, loadGalleries); } public function loadGalleries(e:Event):void { var xmlData:XML = new XML(e.target.data); var album; for each (album in xmlData.albums) { Albums.push(new MediaAlbum(album.attribute("name"), album.photos)); } } } } XML test source: <gallery <albums <album name="Fútbol 9" <photos <photo width="600" height="400" filename="foto1.jpg" / <photo width="600" height="400" filename="foto2.jpg" / <photo width="600" height="400" filename="foto3.jpg" / </photos </album <album name="Fútbol 8" <photos <photo width="600" height="400" filename="foto4.jpg" / <photo width="600" height="400" filename="foto5.jpg" / <photo width="600" height="400" filename="foto6.jpg" / </photos </album </albums </gallery

    Read the article
