Search Results

Search found 16639 results on 666 pages for 'task engine'.

Page 595 of 666

  • C# Hook Forms / Windows / Dialogs etc. (via HWND?) to Capture Video Buffer (D3D Device?)

    - by Drax
    I am looking to create a very simple C# application which runs full-screen in Direct3D and is able to grab the desktop "scene", mapping each window from the desktop to a textured polygon in my D3D scene. I'm hoping to create a simplistic "3D desktop" type of application as an experiment, and I'm wondering if there is a specific method for doing something like the following:
    1) Get a list of all the windows open on the desktop (a list of HWNDs?).
    2) Grab the X,Y position of each window, as well as its width and height.
    3) Grab the rendered image of each window (magic happens here).
    4) Create a new texture/surface in D3D using the width and height of each window, and apply the image we grabbed as a texture.
    Is there an efficient best practice for acquiring the actual images being rendered to the desktop? Is there also a best practice for "extending the desktop" to a virtual second, third, etc. desktop and being able to swap between them, including creating a unique instance of the taskbar for each virtual desktop? Thanks a million for any suggestions!
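
    For steps 1 and 2, a minimal P/Invoke sketch (the Win32 signatures are standard; the visibility filter and output format are assumptions for illustration, not from the question) that lists the visible top-level windows and their bounds:

        using System;
        using System.Runtime.InteropServices;
        using System.Text;

        static class DesktopWindows
        {
            [StructLayout(LayoutKind.Sequential)]
            struct RECT { public int Left, Top, Right, Bottom; }

            delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

            [DllImport("user32.dll")] static extern bool EnumWindows(EnumWindowsProc callback, IntPtr lParam);
            [DllImport("user32.dll")] static extern bool IsWindowVisible(IntPtr hWnd);
            [DllImport("user32.dll")] static extern bool GetWindowRect(IntPtr hWnd, out RECT rect);
            [DllImport("user32.dll", CharSet = CharSet.Unicode)]
            static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int maxLength);

            static void Main()
            {
                EnumWindows((hWnd, lParam) =>
                {
                    if (!IsWindowVisible(hWnd)) return true;   // skip hidden windows, keep enumerating

                    RECT r;
                    GetWindowRect(hWnd, out r);
                    var title = new StringBuilder(256);
                    GetWindowText(hWnd, title, title.Capacity);

                    Console.WriteLine("{0}: '{1}' at ({2},{3}) size {4}x{5}",
                        hWnd, title, r.Left, r.Top, r.Right - r.Left, r.Bottom - r.Top);
                    return true;                               // continue enumeration
                }, IntPtr.Zero);
            }
        }

    For step 3, the usual candidates are PrintWindow or the DWM thumbnail API rather than reading the Direct3D front buffer directly; which of those counts as "best practice" is exactly the open question here.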

    Read the article

  • Was Visual Studio 2008 or 2010 written to use multiple cores?

    - by Erx_VB.NExT.Coder
    Basically, I want to know whether the Visual Studio IDE and/or compiler in 2010 was written to make use of a multi-core environment (I understand we can target multi-core environments in '08 and '10, but that is not my question). I am trying to decide between a higher-clocked dual core and a lower-clocked quad core, because I want to figure out which processor will give me the best possible experience with Visual Studio 2010 (IDE and background compiler). If the most important work (the background compiler and other IDE tasks) runs on one core, then a quad core gains less, especially if the background compiler is the heaviest task; I imagine that would be difficult to separate into more than one process. So even if VS uses multiple cores, you might still be better off going for a higher-clocked CPU if the majority of the processing is still bound to one core (i.e. the most significant part of the VS environment). I am a VB programmer; they've made great performance improvements in Beta 2, congrats, but I would love to be able to use VS seamlessly... anyone have any ideas? Thanks, erx

    Read the article

  • Rapid Opening and Closing System.IO.StreamWriter in C#

    - by ccomet
    Suppose you have a file that you are programmatically logging information into with regards to a process. Kinda like your typical debug Console.WriteLine, but due to the nature of the code you're testing, you don't have a console to write onto so you have to write it somewhere like a file. My current program uses System.IO.StreamWriter for this task. My question is about the approach to using the StreamWriter. Is it better to open just one StreamWriter instance, do all of the writes, and close it when the entire process is done? Or is it a better idea to open a new StreamWriter instance to write a line into the file, then immediately close it, and do this for every time something needs to be written in? In the latter approach, this would probably be facilitated by a method that would do just that for a given message, rather than bloating the main process code with excessive amounts of lines. But having a method to aid in that implementation doesn't necessarily make it the better choice. Are there significant advantages to picking one approach or the other? Or are they functionally equivalent, leaving the choice on the shoulders of the programmer?
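
    For comparison, a minimal sketch of the "open once, flush per line" variant (the class name and file path are made up for illustration): a single StreamWriter is held open for the whole run, and AutoFlush pushes each line to the file immediately, so you keep most of the durability of the open/close-per-line approach without paying for a file open on every message.

        using System;
        using System.IO;

        sealed class FileLogger : IDisposable
        {
            private readonly StreamWriter writer;

            public FileLogger(string path)
            {
                // Append mode; one writer for the lifetime of the process being logged.
                writer = new StreamWriter(path, append: true) { AutoFlush = true };
            }

            public void Log(string message)
            {
                writer.WriteLine("{0:O}  {1}", DateTime.Now, message);
            }

            public void Dispose()
            {
                writer.Dispose();   // closes the underlying file handle
            }
        }

        // usage:
        // using (var log = new FileLogger(@"C:\temp\process.log"))
        // {
        //     log.Log("step 1 done");
        // }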

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article

  • VB.NET Update Access Database with DataTable

    - by sinDizzy
    I've been perusing some help forums and some help books but can't seem to get my head wrapped around this. My task is to read data from two text files and then load that data into an existing MS Access 2007 database. So here is what I'm trying to do: read data from the first text file and, for every line of data, add a row to a DataTable using CarID as my unique field; read data from the second text file and look for an existing CarID in the DataTable, updating that row if it exists and adding a new row if it doesn't; once I'm done, push the contents of the DataTable to the database. What I have so far:

        Dim sSQL As String = "SELECT * FROM tblCars"
        Dim da As New OleDb.OleDbDataAdapter(sSQL, conn)
        Dim ds As New DataSet
        da.Fill(ds, "CarData")
        Dim cb As New OleDb.OleDbCommandBuilder(da)

        'loop: read a line of text and parse it out. gets dd, dc, and carID
            'create a new empty row
            Dim dsNewRow As DataRow = ds.Tables("CarData").NewRow()
            'update the new row with fresh data
            dsNewRow.Item("DriveDate") = dd
            dsNewRow.Item("DCode") = dc
            dsNewRow.Item("CarNum") = carID
            'about 15 more fields
            'add the filled row to the DataSet table
            ds.Tables("CarData").Rows.Add(dsNewRow)
        'end loop

        'update the database with the new rows
        da.Update(ds, "CarData")

    Questions: In constructing my table I use "SELECT * FROM tblCars", but what if that table already has millions of records? Is that not a waste of resources? Should I be trying something different if I want to update it with new records? Once I'm done with the first text file I then go to my next text file; what's the best approach here: first look for an existing record based on CarNum, or create a second table and then merge the two at the end? Finally, when the DataTable is done being populated and I'm pushing it to the database, I want to make sure that if records already exist with the three primary fields (DriveDate, DCode, and CarNum) they get updated with the new fields, and if they don't exist the records get appended. Is that possible with my process? tia AGP
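
    One way to handle the "update if it exists, otherwise add" step is to give the in-memory DataTable a primary key and use Rows.Find before deciding. A sketch in C# (the ADO.NET calls are identical in VB.NET; the column names are the ones from the question, everything else is assumed):

        using System.Data;

        static class CarDataHelper
        {
            // One-time setup elsewhere: carData.PrimaryKey = new[] { carData.Columns["CarNum"] };
            public static void UpsertCar(DataTable carData, string dd, string dc, string carId)
            {
                DataRow row = carData.Rows.Find(carId);   // keyed lookup, no table scan
                if (row == null)
                {
                    row = carData.NewRow();
                    row["CarNum"] = carId;
                    carData.Rows.Add(row);
                }
                row["DriveDate"] = dd;
                row["DCode"] = dc;
                // ... the other ~15 fields ...
            }
        }

    Note that DataAdapter.Update only issues INSERTs for rows the DataSet considers new; detecting rows that already exist in the Access table itself (the DriveDate/DCode/CarNum check) would still need either a pre-load of those keys or a separate lookup query before the update.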

    Read the article

  • Is it safe to use a boolean flag to stop a thread from running in C#

    - by Lirik
    My main concern is with the boolean flag... is it safe to use it without any synchronization? I've read in several places that it's atomic.

        class MyTask
        {
            private ManualResetEvent startSignal;
            private CountDownLatch latch;
            private bool running;

            MyTask(CountDownLatch latch)
            {
                running = false;
                this.latch = latch;
                startSignal = new ManualResetEvent(false);
            }

            // A method which runs in a thread
            public void Run()
            {
                startSignal.WaitOne();
                while (running)
                {
                    startSignal.WaitOne();
                    //... some code
                }
                latch.Signal();
            }

            public void Stop()
            {
                running = false;
                startSignal.Set();
            }

            public void Start()
            {
                running = true;
                startSignal.Set();
            }

            public void Pause()
            {
                startSignal.Reset();
            }

            public void Resume()
            {
                startSignal.Set();
            }
        }

    Is this a safe way to design a task? Any suggestions, improvements, comments? Note: I wrote my custom CountDownLatch class in case you're wondering where I'm getting it from.
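
    Writes to a bool are atomic, but atomicity is not the whole story: without a memory barrier the loop thread is allowed to keep reading a stale value. A common minimal fix is to mark the flag volatile; the sketch below shows only that change (the re-check after WaitOne is an added suggestion, not part of the question's code, and the latch is omitted for brevity).

        using System.Threading;

        class CooperativeTask
        {
            private readonly ManualResetEvent startSignal = new ManualResetEvent(false);
            private volatile bool running;      // volatile: Stop()'s write becomes visible to Run()'s loop

            public void Run()
            {
                startSignal.WaitOne();
                while (running)
                {
                    startSignal.WaitOne();
                    if (!running) break;        // re-check after being woken by Stop()
                    // ... some code ...
                }
            }

            public void Start()  { running = true;  startSignal.Set();   }
            public void Stop()   { running = false; startSignal.Set();   }
            public void Pause()  {                  startSignal.Reset(); }
            public void Resume() {                  startSignal.Set();   }
        }

    On .NET 4 and later, a CancellationToken expresses the same intent with library support instead of a hand-rolled flag.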

    Read the article

  • How can I build value for "for-each" expression in XSLT with help of parameter

    - by Artic
    I need to navigate through this XML tree:

        <publication>
          <corporate>
            <contentItem>
              <metadata>meta</metadata>
              <content>html</content>
            </contentItem>
            <contentItem>
              <metadata>meta1</metadata>
              <content>html1</content>
            </contentItem>
          </corporate>
          <eurasia-and-africa> ... </eurasia-and-africa>
          <europe> ... </europe>
        </publication>

    and convert it to HTML with this stylesheet:

        <ul>
          <xsl:variable name="itemsNode" select="concat('publicationManifest/',$group,'/contentItem')"></xsl:variable>
          <xsl:for-each select="$itemsNode">
            <li>
              <xsl:value-of select="content"/>
            </li>
          </xsl:for-each>
        </ul>

    $group is a parameter holding the name of a group, for example "corporate". I get an error when compiling this stylesheet:

        SystemID: D:\1\contentsTransform.xslt
        Engine name: Saxon 6.5.5
        Severity: error
        Description: The value is not a node-set
        Start location: 18:0

    What is the matter?

    Read the article

  • How to Bind a Command in WPF

    - by MegaMind
    Sometimes we use the complex ways so often that we forget the simplest way to do a task. I know how to do command binding, but I always use the same approach: create a class that implements the ICommand interface, create a new instance of that class from the view model, and the binding works like a charm. This is the code I use for command binding:

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();
                DataContext = this;
                testCommand = new MeCommand(processor);
            }

            ICommand testCommand;
            public ICommand test
            {
                get { return testCommand; }
            }

            public void processor()
            {
                MessageBox.Show("hello world");
            }
        }

        public class MeCommand : ICommand
        {
            public delegate void ExecuteMethod();
            private ExecuteMethod meth;

            public MeCommand(ExecuteMethod exec)
            {
                meth = exec;
            }

            public bool CanExecute(object parameter)
            {
                return false;
            }

            public event EventHandler CanExecuteChanged;

            public void Execute(object parameter)
            {
                meth();
            }
        }

    But I want to know the most basic way to do this: no third-party DLL and no new class creation, just simple command binding using a single class, where the actual class implements ICommand and does the work.
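
    One built-in route that avoids a hand-written ICommand class is a RoutedCommand with a CommandBinding registered on the window itself, so the window is the single class involved. A sketch (member names are illustrative, and the matching XAML button is shown as a comment):

        using System.Windows;
        using System.Windows.Input;

        public partial class MainWindow : Window
        {
            public static readonly RoutedCommand TestCommand = new RoutedCommand();

            public MainWindow()
            {
                InitializeComponent();
                // Route Execute/CanExecute for TestCommand to methods on this window.
                CommandBindings.Add(new CommandBinding(TestCommand, OnTestExecuted, OnTestCanExecute));
            }

            private void OnTestExecuted(object sender, ExecutedRoutedEventArgs e)
            {
                MessageBox.Show("hello world");
            }

            private void OnTestCanExecute(object sender, CanExecuteRoutedEventArgs e)
            {
                e.CanExecute = true;   // note: the original sample returns false, which would disable the button
            }

            // In MainWindow.xaml:
            // <Button Content="Run" Command="{x:Static local:MainWindow.TestCommand}" />
        }

    The trade-off is that routed commands resolve through the visual tree rather than through the view model, so they suit code-behind-centric windows better than strict MVVM.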

    Read the article

  • Automatically Organize Tags in Tax/Folksonomy

    - by Rob Wilkerson
    I'm working on a process that will perform natural language processing (NLP) on one--and potentially several--of our content rich sites. What I'd like to do once the NLP is complete is to automatically organize the output (generally a set of terms that you might think of as tags given the prevalence of that metaphor) into some kind of standard or generally accepted organizational structure. In a perfect world, I'd really like this to be crowd sourced under the folksonomy concept (as opposed to a taxonomy) since the ultimate goal is to target/appeal to real people rather than "domain experts", but I'm open to ideas and best practices. For the obvious purpose of scalability, I'd like to automate the population of this tax/folksonomy so that "some guy" in the team/organization isn't responsible for looking at a bunch of words (with or without context) and arbitrarily fleshing out the contextual components of the tree. I have a few ideas for doing this that require some research to establish viability, but I have exactly zero practical experience with this sort of thing so the ideas really just boil down to stuff I made up that might perform some role in accomplishing the task. Imagining that others have vastly more experience with this sort of thing, I'm hoping that I can stand on your shoulders. Thanks for your thoughts and insights.

    Read the article

  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. To do this I must take 64 tables in MS Access and merge them into one. The output must be a TAB or CSV file, which will then be imported into Maximizer.
    THE PROBLEM: Access is seemingly unable to perform a query this complex; it crashes any time I run the query.
    ALTERNATIVES: I have thought about a few alternatives and would like to do the least time-consuming one, while also taking advantage of any opportunity to learn something new:
    1) Export each table to CSV, import the CSVs into SQLite, and then write a query there to do what Access fails to do (merge the 64 tables).
    2) Export each table to CSV and write a script that reads each one and merges the CSVs into a single CSV.
    3) Connect to the MS Access DB through an API and write a script to pull data from each table and merge it into a CSV file (a sketch of this is below).
    QUESTION: What do you recommend?
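
    Alternative 3 can be fairly short in .NET. A rough sketch (the file paths, provider string, naive comma escaping, and the assumption that the 64 tables share a compatible column layout are all illustrative, not from the question):

        using System;
        using System.Data;
        using System.Data.OleDb;
        using System.IO;

        class AccessToCsv
        {
            static void Main()
            {
                // Assumed paths; use Microsoft.Jet.OLEDB.4.0 instead for an .mdb file.
                string connStr = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\source.accdb";
                using (var conn = new OleDbConnection(connStr))
                using (var writer = new StreamWriter(@"C:\data\merged.csv"))
                {
                    conn.Open();
                    // Restriction array = {catalog, schema, table, type}; "TABLE" filters out system tables.
                    DataTable tables = conn.GetSchema("Tables", new string[] { null, null, null, "TABLE" });
                    foreach (DataRow t in tables.Rows)
                    {
                        string tableName = (string)t["TABLE_NAME"];
                        using (var cmd = new OleDbCommand("SELECT * FROM [" + tableName + "]", conn))
                        using (var reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                string[] fields = new string[reader.FieldCount];
                                for (int i = 0; i < reader.FieldCount; i++)
                                    fields[i] = Convert.ToString(reader[i]).Replace(",", " ");  // naive CSV escaping
                                writer.WriteLine(string.Join(",", fields));
                            }
                        }
                    }
                }
            }
        }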

    Read the article

  • How to detect a Socket disconnection?

    - by AngryHacker
    I've implemented a task using the async Sockets pattern in Silverlight 3. I started with Michael Schwarz's implementation and built on top of that. So basically, my Silverlight app establishes a persistent socket connection to a device, and then data flows both ways as necessary between the device and the Silverlight app. One thing I am struggling with is how to detect disconnection. I can think of two approaches:
    1) Keep-alive. I know this can be done at the Sockets level, but I am not sure how to do it in an async model. How would the Socket class let me know there has been a disconnection?
    2) Manual keep-alive (see the sketch below). Basically, I have the Silverlight app send a dummy packet every 20 seconds or so; if it fails, I'd assume disconnection. However, incredibly, SocketAsyncEventArgs.SocketError always reports success, even if I simply unplug the device that the Silverlight app is connected to. I am not sure whether this is a bug, or whether I need to upgrade to SL4.
    Any ideas, direction or implementation would be appreciated.
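
    A send that merely lands in the local TCP buffer will report Success, which is consistent with what you see after unplugging the device; a disconnect usually only becomes visible when a send eventually times out or when a pending receive completes with zero bytes. A hedged sketch of the manual heartbeat plus that receive-side check (member names are assumed, not from the question):

        using System.Net.Sockets;

        partial class DeviceConnection
        {
            void SendHeartbeat(Socket socket)
            {
                var args = new SocketAsyncEventArgs();
                args.SetBuffer(new byte[] { 0 }, 0, 1);              // 1-byte dummy payload
                args.Completed += (s, e) =>
                {
                    if (e.SocketError != SocketError.Success)
                        OnDisconnected();                            // your app's handler
                };
                if (!socket.SendAsync(args) && args.SocketError != SocketError.Success)
                    OnDisconnected();                                // completed synchronously with an error
            }

            void OnReceiveCompleted(object sender, SocketAsyncEventArgs e)
            {
                // A graceful close by the peer shows up here, not on the send path.
                if (e.SocketError != SocketError.Success || e.BytesTransferred == 0)
                    OnDisconnected();
            }

            void OnDisconnected() { /* tear down, reconnect, notify the UI, ... */ }
        }

    An abrupt unplug with nothing in flight can still go unnoticed until the next heartbeat actually fails, so the heartbeat interval effectively sets your detection latency.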

    Read the article

  • CakePHP dropping session between pages

    - by DavidYell
    Hi, I have an application with multiple regions and various incoming links. The premise (well, it worked before) is that in the app_controller I break out these incoming links and set them in the session. So I have a huge beforeFilter() in my app_controller which catches these and sets two variables in the session, Viewing.region and Search.engine; no problem. The problem is that the session does not seem to persist across page requests. So, for example, going to /reviews/write (userReviews/add) should have a session available which was set when the user arrived at the site, although it seems to have vanished! It would appear that unless $this->params is caught explicitly in the app_controller and a session variable written, it does not exist on other pages. So far I have tried swapping between storing the session in 'cake' and 'php'; both seem to exhibit the same behaviour. I use 'php' as the default. My Session.timeout is '120', Session.checkAgent is False and Security.level is 'low', all of which should give the framework enough leniency to allow sessions the most room to live! I'm a bit stumped as to why the session seems to be either recreated or blanked when a new page is requested. I have also commented out the requestAction() calls to make sure they aren't confusing the session request object, which doesn't seem to make a difference. Any help would be great, as I don't want to have to recode the site to pass all the various variables via parameters in the URL; that would suck, and it has worked before, hence switching on $this->Session->read('Viewing.region') in all my code!

    Read the article

  • SSIS Expressions - EvaluateAsExpression Problem

    - by Randy Minder
    In a Data Flow, I have an Derived Column task. In the expression for one of the columns, I have the following expression: [siteid] == "100" ? "1101" : [siteid] == "110" ? "1001" : [siteid] == "120" ? "2101" : [siteid] == "140" ? "1102" : [siteid] == "210" ? "2001" : [siteid] == "310" ? "3001" : [siteid] This works just fine. However, I intend to reuse this in at least a dozen other places so I want to store this to a variable and use the variable in the Derived Column instead of the hard-coded expression. When I attempt to create a variable, using the expression above, I get a syntax error saying 'siteid' is not defined. I guess this makes sense because it isn't. But how can I get this the expression to work by using a variable? It seems like I need some sort of way to tell it that 'siteid' will be the column containing the data I want to apply the expression to.

    Read the article

  • C++, Ifstream opens local file but not file on HTTP Server

    - by fammi
    Hi, I am using ifstream to open a file and then read from it. My program works fine when I give it the location of a local file on my system, e.g. /root/Desktop/abc.xxx works fine. But once the location is on an HTTP server the file fails to open, e.g. http://192.168.0.10/abc.xxx fails to open. Is there any alternative to ifstream when using a URL address? Thanks. This is the part of the code where I'm having the problem:

        bool readTillEof = (endIndex == -1) ? true : false;

        // Open the file in binary mode and seek to the end to determine file size
        ifstream file ( fileName.c_str ( ), ios::in|ios::ate|ios::binary );
        if ( file.is_open ( ) )
        {
            long size = (long) file.tellg ( );
            long numBytesRead;
            if ( readTillEof )
            {
                numBytesRead = size - startIndex;
            }
            else
            {
                numBytesRead = endIndex - startIndex + 1;
            }

            // Allocate a new buffer ptr to read in the file data
            BufferSptr buf (new Buffer ( numBytesRead ) );
            mpStreamingClientEngine->SetResponseBuffer ( nextRequest, buf );

            // Seek to the start index of the byte range
            // and read the data
            file.seekg ( startIndex, ios::beg );
            file.read ( (char *)buf->GetData(), numBytesRead );

            // Pass on the data to the SCE
            // and signal completion of request
            mpStreamingClientEngine->HandleDataReceived( nextRequest, numBytesRead);
            mpStreamingClientEngine->MarkRequestCompleted( nextRequest );

            // Close the file
            file.close ( );
        }
        else
        {
            // Report error to the Streaming Client Engine
            // as unable to open file
            AHS_ERROR ( ConnectionManager, " Error while opening file \"%s\"\n", fileName.c_str ( ) );
            mpStreamingClientEngine->HandleRequestFailed( nextRequest, CONNECTION_FAILED );
        }

    Read the article

  • Yeoman 'grunt test' fails on clean project with 'port already in use'

    - by XMLilley
    With: Mac OS 10.8.4 Node 0.10.12 npm 1.3.1 grunt-cli 0.1.9 yo 1.0.0-rc.1 bower 0.9.2 [email protected] I encounter the following error with a clean yo angular project, followed by grunt server then grunt test: Running "connect:test" (connect) task Fatal error: Port 9000 is already in use by another process. I'm new to Yeoman and am stumped. I've deleted my original project and created a new one in a fresh folder just to make sure I wasn't overlooking any invisible configs. I restarted the machine to make sure I wasn't running any temporary server processes I had forgotten about. After all attempts, the basic server starts fine, attaches to Chrome, and the watcher updates the browser on any changes. (Notably, the server is running on 9000, which seems odd for the test-runner to also be trying to use 9000.) But I get that same error on attempting to start the test runner. Is this something I can fix, or an issue I should report to the Yeoman team? Thanks.

    Read the article

  • Can't store UTF-8 in RDS despite setting up new Parameter Group using Rails on Heroku

    - by Lail
    I'm setting up a new instance of a Rails (2.3.5) app on Heroku using Amazon RDS as the database. I'd like to use UTF-8 for everything. Since RDS isn't UTF-8 by default, I set up a new Parameter Group and switched the database to use that one, basically per this. It seems to have worked:

        SHOW VARIABLES LIKE '%character%';
        character_set_client      utf8
        character_set_connection  utf8
        character_set_database    utf8
        character_set_filesystem  binary
        character_set_results     utf8
        character_set_server      utf8
        character_set_system      utf8
        character_sets_dir        /rdsdbbin/mysql-5.1.50.R3/share/mysql/charsets/

    Furthermore, I've successfully set up Heroku to use the RDS database. After rake db:migrate, everything looks good:

        CREATE TABLE `comments` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `commentable_id` int(11) DEFAULT NULL,
          `parent_id` int(11) DEFAULT NULL,
          `content` text COLLATE utf8_unicode_ci,
          `child_count` int(11) DEFAULT '0',
          `created_at` datetime DEFAULT NULL,
          `updated_at` datetime DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY `commentable_id` (`commentable_id`),
          KEY `index_comments_on_community_id` (`community_id`),
          KEY `parent_id` (`parent_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    In the markup, I've included:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    Also, in database.yml I've set:

        production:
          encoding: utf8
          collation: utf8_general_ci

    ...though I'm not very confident that anything is being done to honor any of those settings in this case, as Heroku seems to be doing its own config when connecting to RDS. Now, I enter a comment through the form in the app: "Úbe® ƒåiL", but in the database I've got "Úbe® Æ’Ã¥iL". It looks fine when Rails loads it back out of the database and it is rendered to the page, so whatever it is doing one way, it's undoing the other way. If I look at the RDS database in Sequel Pro, it looks fine if I set the encoding to "UTF-8 Unicode via Latin 1". So it seems Latin-1 is sneaking in there somewhere. Somebody must have done this before, right? What am I missing?

    Read the article

  • Attempt to create nested for loops generating missing arguments error

    - by JerryK
    I am attempting to teach myself to program using Tcl. (I want to become more familiar with the language in order to understand someone else's code - SCID chess.) The task I've set myself to motivate my learning of Tcl is to solve the 8 queens problem. My approach to creating a program is to successively 'prototype' a solution. So, I'm up to nesting the for loop holding the queen position on row 2 inside the for loop holding the queen position on row 1. Here is my code:

        set allowd 1
        set notallowd 0

        for {set r1p 1} {$r1p <= 8} {incr r1p } {
            puts "1st row q placed at $r1p"
            ;# re-initialize r2 'free for q placemnt' array after every change of r1 q pos:
            for {set i 1 } {$i <= 8} {incr i} {
                set r2($i) $allowd
            }
            for { set r2($r1p) $notallowd ; set r2([eval $r1p-1]) $notallowd ; set r2([eval $r1p+1]) $notallowd ; set r2p 1} {$r2p <= 8} {
                incr r2p ;# end of 'next' arg of r2 forloop
            } ;# commnd arg of r2 forloop placed below:
            {puts "2nd row q placed at $r2p" }
        }

    My problem is that when I run the code the interpreter aborts with the fatal error: "wrong # args: should be for start test next command". I've gone over my code a few times and can't see that I've missed any of the for loop arguments.

    Read the article

  • Error in django using Apache & mod_wsgi

    - by Ignacio
    Hey, I've been making some changes to my Django development environment, as some of you suggested. So far I've managed to configure and run it successfully with Postgres. Now I'm trying to run the app using Apache2 and mod_wsgi, but I ran into this little problem after following the guidelines from the Django docs. When I access localhost/myapp/tasks this error is raised:

        Request Method: GET
        Request URL: http://localhost/myapp/tasks/
        Exception Type: TemplateSyntaxError
        Exception Value: Caught an exception while rendering: argument 1 must be a string or unicode object

        Original Traceback (most recent call last):
          File "/usr/local/lib/python2.6/dist-packages/django/template/debug.py", line 71, in render_node
            result = node.render(context)
          File "/usr/local/lib/python2.6/dist-packages/django/template/defaulttags.py", line 126, in render
            len_values = len(values)
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 81, in __len__
            self._result_cache = list(self.iterator())
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 238, in iterator
            for row in self.query.results_iter():
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 287, in results_iter
            for rows in self.execute_sql(MULTI):
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 2369, in execute_sql
            cursor.execute(sql, params)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 19, in execute
            return self.cursor.execute(sql, params)
        TypeError: argument 1 must be a string or unicode object
        ...

    It then highlights a {% for t in tasks %} template tag, as if the source of the problem were there, but it worked fine on the built-in server. The view associated with that page is really simple: it just fetches all Task objects, and the template just displays them in a table. Also, some pages do get rendered OK. I don't want to fill this question with code, so if you need more info I'd be glad to provide it. Thanks

    Read the article

  • Are workflows good for web service business logic?

    - by JL
    I have a series of complex web services that are used in my SOA application. I am generally happy with the overall design of the application, but as the complexity grows I am wondering whether Windows Workflow might be the way to go. My motivation is that you get a graphical representation of the application's functionality, so it would be easier to maintain the code by its business function rather than what I have now (a standard 3-tier class library structure). My concerns are: I would be introducing an abstraction into my code, and I don't want to spend time dealing with possible WF quirks or bugs. I've never worked with WF; is it a solid technology? I don't want to hit any WF limitations that prevent me from developing my solution. Is WF even the right solution for the task? Simply put, I am considering having my next web service in this app call a WF, and managing the tasks the web service needs to carry out in that workflow. I think it will be much neater and easier to maintain than a regular C# class library (maintainable by namespaces and classes). Do you think this is the right thing to do? I'm hoping for positive feedback on WF (.NET 4), but brutal honesty at the end of the day would help more. Thanks

    Read the article

  • Response time increasing (worsening) over time with consistent load

    - by NJ
    Ok. I know I don't have a lot of information. That is, essentially, the reason for my question. I am building a game using Flash/Flex and Rails on the back-end. Communication between the two is via WebORB. Here is what is happening. When I start the client an operation calls the server every 60 seconds (not much, right?) which results in two database SELECTS and an UPDATE and a resulting response to the client. This repeats every 60 seconds. I deployed a test version on heroku and NewRelic's RPM told me that response time degraded over time. One client with one task every 60 seconds. Over several hours the response time drifted from 150ms to over 900ms in response time. I have been able to reproduce this in my development environment (my Macbook Pro) so it isn't a problem on Heroku's side. I am not doing anything sophisticated (by design) in the server app. An action gets called, gets some data from the database, performs an AR update and then returns a response. No caching, etc. Any thoughts? Anyone? I'd really appreciate it.

    Read the article

  • Asp.Net MVC3 - How create Dynamic DropDownList

    - by Bibo
    I found many articles on this but I still don't know exactly how to do it. I am trying to create my own blog engine, and I have a view for creating an article (I am using EF and Code First). At the moment I must type in the number of the category the article should be added to, but I want to change that to a drop-down list with the names of the categories. My model looks like this:

        public class Article
        {
            public int ArticleID { get; set; }

            [Required]
            public string Title { get; set; }

            [Required]
            public int CategoryID { get; set; }

            public DateTime Date { get; set; }

            [Required()]
            [DataType(DataType.MultilineText)]
            [AllowHtml]
            public string Text { get; set; }

            public virtual Category Category { get; set; }
            public IEnumerable<SelectListItem> Categories { get; set; }
            public virtual ICollection<Comment> Comments { get; set; }
        }

        public class Category
        {
            public int CategoryID { get; set; }

            [Required]
            public string Name { get; set; }

            public virtual ICollection<Article> Articles { get; set; }
        }

    I know I must use an Enum (or I think so) but I am not exactly sure how. I don't know which of the tutorials I found is best for me.
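
    The usual MVC3 pattern doesn't actually need an enum: the controller builds a SelectList from the categories and the view binds it to CategoryID with Html.DropDownListFor. A sketch (the controller name, action names, and the BlogContext EF Code First context are assumptions, not from the question):

        using System;
        using System.Web.Mvc;

        public class ArticlesController : Controller
        {
            private readonly BlogContext db = new BlogContext();   // assumed DbContext with Articles/Categories sets

            public ActionResult Create()
            {
                ViewBag.CategoryID = new SelectList(db.Categories, "CategoryID", "Name");
                return View();
            }

            [HttpPost]
            public ActionResult Create(Article article)
            {
                if (ModelState.IsValid)
                {
                    article.Date = DateTime.Now;
                    db.Articles.Add(article);
                    db.SaveChanges();
                    return RedirectToAction("Index");
                }
                // Repopulate the list when validation fails, keeping the user's selection.
                ViewBag.CategoryID = new SelectList(db.Categories, "CategoryID", "Name", article.CategoryID);
                return View(article);
            }

            // In the Create view:
            // @Html.DropDownListFor(m => m.CategoryID, (SelectList)ViewBag.CategoryID)
        }

    On postback, only CategoryID is bound from the selected item; the Categories property on the model is not needed for this approach.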

    Read the article

  • Best ways to construct Dynamic Search Conditions for Sql

    - by CoolBeans
    I have always wondered what's the best way to achieve this task. In most web-based applications you have to provide search options on many different criteria, and based on which criteria are chosen you modify your SQL behind the scenes. Generally, this is how I tend to go about it: have a base SQL template, and in the base template have conditions like WHERE [#PRE_COND1] AND [#PRE_COND2], and so on and so forth. So an example SQL template might look something like:

        SELECT NAME, AGE
        FROM PERSONS [,#TABLE2] [,#TABLE3]
        WHERE [#PRE_COND1] AND [#PRE_COND2]
        ORDER BY [#ORD_COND1] AND [#ORD_COND2]

    At run time, after figuring out all the search criteria the user has entered, I replace the [#PRE_COND1]s and [#ORD_COND1]s with the appropriate SQL fragments and then execute the query. I personally do not like this brute-force method, but I have never come across a better approach either. How do you generally accomplish such tasks, given that you are using either native JDBC or Spring JDBC? It is almost as if I need C-macro-like functionality in Java to do this.
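
    The question is about JDBC/Spring, but the shape of the usual answer is language-agnostic: append a WHERE condition only for the criteria that were actually supplied, and keep the values out of the SQL string as bind parameters. A sketch of that pattern in C#/ADO.NET purely for illustration (with JDBC the same idea uses a PreparedStatement with '?' placeholders; the table and parameter names here are taken from the example template, everything else is assumed):

        using System.Data.Common;
        using System.Text;

        static class PersonSearch
        {
            public static DbCommand Build(DbConnection conn, string name, int? minAge, int? maxAge)
            {
                var sql = new StringBuilder("SELECT NAME, AGE FROM PERSONS WHERE 1=1");
                DbCommand cmd = conn.CreateCommand();

                // Append a condition only when the caller actually supplied that criterion.
                if (!string.IsNullOrEmpty(name))
                {
                    sql.Append(" AND NAME LIKE @name");
                    var p = cmd.CreateParameter(); p.ParameterName = "@name"; p.Value = name + "%"; cmd.Parameters.Add(p);
                }
                if (minAge.HasValue)
                {
                    sql.Append(" AND AGE >= @minAge");
                    var p = cmd.CreateParameter(); p.ParameterName = "@minAge"; p.Value = minAge.Value; cmd.Parameters.Add(p);
                }
                if (maxAge.HasValue)
                {
                    sql.Append(" AND AGE <= @maxAge");
                    var p = cmd.CreateParameter(); p.ParameterName = "@maxAge"; p.Value = maxAge.Value; cmd.Parameters.Add(p);
                }

                cmd.CommandText = sql.ToString();
                return cmd;
            }
        }

    The "WHERE 1=1" trick just lets every real condition be appended uniformly with "AND"; ORDER BY fragments can be appended the same way, ideally from a whitelist of column names rather than raw user input.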

    Read the article

  • MySQL PHP | "SELECT FROM table" using "alphanumeric"-UUID. Speed vs. Indexed Integer / Indexed Char

    - by dropson
    At the moment, I select rows from 'table01' using:

        SELECT * FROM table01 WHERE UUID = 'whatever';

    The UUID column is a unique index. I know this isn't the fastest way to select data from the database, but the UUID is the only row identifier available to the front end. Since I have to select by UUID, and not ID, I need to know which of these two options I should go for, if say the table consists of 100,000 rows. What speed differences would I be looking at, and would the index for the UUID grow too large and lag the DB?

    Option 1: get the ID before doing the "big" select.

        1. $id   = "SELECT ID FROM table01 WHERE UUID = '{alphanumeric character}'";
        2. $rows = SELECT * FROM table01 WHERE ID = $id;

    Option 2: keep it the way it is now, using the UUID.

        1. SELECT FROM table01 WHERE UUID '{alphanumeric character}';

    Side note: all new rows are created by checking whether the system-generated unique ID exists before trying to insert a new row, keeping the column always unique. The "example" table:

        CREATE TABLE Table01 (
            ID     int NOT NULL PRIMARY KEY,
            UUID   char(15),
            name   varchar(100),
            url    varchar(255),
            `date` datetime
        ) ENGINE = InnoDB;

        CREATE UNIQUE INDEX UUID ON Table01 (UUID);

    Read the article

  • Is there a Designer for MFC in Visual Studio like for windows forms in .NET?

    - by claws
    I'm a .NET programmer. I've never developed anything in MFC. Currently I had to write a C++ application (console) for some image processing task. I finished writing it. But the point is I need to design GUI also for this. Well, there won't be anything complex. Just a window with few Buttons, RadioButtons, Check Boxes, PicturesBox & few sliders. thats it. I'm using VS 2008 and was expecting a .NET style form designer. Just to test, I created a MFC project (with all default configuration) and these files were created by default: ChildFrm.cpp MainFrm.cpp mfc.cpp mfcDoc.cpp mfcView.cpp stdafx.cpp Now, I'm unable to find a Designer. There is no View Designer. I've opened all the above *.cpp and in the code editor right clicked to see "Designer View". ToolBox is just empty because I'm in code editor mode. When I built the project. This is the window I get. How to open a designer?

    Read the article

  • Slow query. Wrong database structure?

    - by Tin
    I have a database with a table that contains tasks. Tasks have a lifecycle, and the status of a task's lifecycle can change. These state transitions are stored in a separate table, tasktransitions. I wrote a query to find all open/reopened tasks as well as recently changed tasks, but even with a rather small number of tasks (<1000) the execution time has already become very long (0.5s).

    Tasks

        +-------------+---------+------+-----+---------+----------------+
        | Field       | Type    | Null | Key | Default | Extra          |
        +-------------+---------+------+-----+---------+----------------+
        | taskid      | int(11) | NO   | PRI | NULL    | auto_increment |
        | description | text    | NO   |     | NULL    |                |
        +-------------+---------+------+-----+---------+----------------+

    Tasktransitions

        +------------------+-----------+------+-----+-------------------+----------------+
        | Field            | Type      | Null | Key | Default           | Extra          |
        +------------------+-----------+------+-----+-------------------+----------------+
        | tasktransitionid | int(11)   | NO   | PRI | NULL              | auto_increment |
        | taskid           | int(11)   | NO   | MUL | NULL              |                |
        | status           | int(11)   | NO   | MUL | NULL              |                |
        | description      | text      | NO   |     | NULL              |                |
        | userid           | int(11)   | NO   |     | NULL              |                |
        | transitiondate   | timestamp | NO   |     | CURRENT_TIMESTAMP |                |
        +------------------+-----------+------+-----+-------------------+----------------+

    Query

        SELECT tasks.taskid, tasks.description, tasklaststatus.status
        FROM tasks
        LEFT OUTER JOIN (
            SELECT tasktransitions.taskid, tasktransitions.transitiondate, tasktransitions.status
            FROM tasktransitions
            INNER JOIN (
                SELECT taskid, MAX(transitiondate) AS lasttransitiondate
                FROM tasktransitions
                GROUP BY taskid
            ) AS tasklasttransition
            ON tasklasttransition.lasttransitiondate = tasktransitions.transitiondate
            AND tasklasttransition.taskid = tasktransitions.taskid
        ) AS tasklaststatus ON tasklaststatus.taskid = tasks.taskid
        WHERE tasklaststatus.status IS NULL
           OR tasklaststatus.status = 0
           OR tasklaststatus.transitiondate > '2013-09-01';

    I'm wondering whether the database structure is the best choice performance-wise. Could adding indexes help? I already tried to add some but I don't see great improvements:

        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
        | Table           | Non_unique | Key_name       | Seq_in_index | Column_name      | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
        | tasktransitions |          0 | PRIMARY        |            1 | tasktransitionid | A         |         896 |     NULL | NULL   |      | BTREE      |         |               |
        | tasktransitions |          1 | taskid_date_ix |            1 | taskid           | A         |         896 |     NULL | NULL   |      | BTREE      |         |               |
        | tasktransitions |          1 | taskid_date_ix |            2 | transitiondate   | A         |         896 |     NULL | NULL   |      | BTREE      |         |               |
        | tasktransitions |          1 | status_ix      |            1 | status           | A         |           3 |     NULL | NULL   |      | BTREE      |         |               |
        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+

    Any other suggestions?

    Read the article
