Search Results

Search found 16639 results on 666 pages for 'task engine'.


  • How to copy files without slowing down my app?

    - by Kevin Gebhardt
    I have a bunch of little files in my assets which need to be copied to the SD card on the first start of my app. The copy code I found, placed in an IntentService, works like a charm. However, when I copy many little files the whole app gets incredibly slow (I'm not really sure why), which is a really bad experience for the user on first start. Since other apps were running normally at the time, I tried to start a child process for the service, but that didn't work because, as far as I understand, I can't access my assets from another process. Does anybody have an idea how to a) copy the files without blocking my app, b) reach my assets from a private process (process=":myOtherProcess" in the manifest), or c) solve the problem in a completely different way?

    Edit: To make this clearer: the copying already takes place in a separate thread (started automatically by IntentService). The problem is not separating the task of copying, but that copying in a dedicated thread somehow affects the rest of the app (e.g. by blocking too many app-specific resources?) while not affecting other apps (so it's not hogging the whole CPU or anything).

    Edit 2: Problem solved; it turns out there wasn't really a problem. See my answer below.
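
    For reference, a minimal sketch of the kind of asset-copy loop described above, as it might run inside the IntentService. The "tiles" folder name and the target directory are placeholder assumptions, not from the question; it assumes the android.content.Context, android.content.res.AssetManager and java.io imports.

        // Minimal sketch of the asset-copy loop; "tiles" and targetDir are
        // placeholder names, not from the original question.
        private void copyAssetFolder(Context ctx, File targetDir) throws IOException {
            AssetManager assets = ctx.getAssets();
            targetDir.mkdirs();
            byte[] buffer = new byte[64 * 1024];           // one reusable buffer
            for (String name : assets.list("tiles")) {
                InputStream in = assets.open("tiles/" + name);
                OutputStream out = new BufferedOutputStream(
                        new FileOutputStream(new File(targetDir, name)));
                try {
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        out.write(buffer, 0, read);        // stream, don't load whole file
                    }
                } finally {
                    in.close();
                    out.close();
                }
            }
        }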

  • Image processing: smart solution for converting superpixel (128x128 pixel) coordinates needed

    - by zhengtonic
    Hi, I am searching for a smart solution to this problem: a cancer CT picture is stored inside an unsigned short array (1-dimensional). I have the location of the cancer region inside the picture, but the coordinates (x,y) are in superpixels (128x128 unsigned shorts). My task is to highlight this region. I already solved it by converting the superpixel coordinates into an offset I can use for the unsigned short array. It works fine, but I wonder if there is a smarter way, since my solution needs three nested for-loops. Is it possible to access the ushort array "superpixelwise", so I can navigate it in superpixels?

        // I know this does not work ... just to give you an idea of what I was thinking of ...
        typedef struct { unsigned short px[128 * 128]; } spix;
        spix *spixptr;
        unsigned short *bufptr = img->getBuf();
        spixptr = bufptr;

    Best regards, zhengtonic
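
    A clarifying note on why a pointer type like the spix struct above cannot work with a flat buffer: the 128 rows of one superpixel are not contiguous in memory, they are separated by the image stride. The best you can do is compute the block's top-left offset once and walk it scanline by scanline, which needs only two loops. A sketch, assuming row-major storage and a width that is a multiple of 128; the function and parameter names are illustrative.

        /* Sketch: address one 128x128 superpixel inside a flat row-major buffer.
           Assumes width is a multiple of 128; names are illustrative. */
        #include <stddef.h>

        enum { SP = 128 };

        static void highlight_superpixel(unsigned short *buf, size_t width,
                                         size_t sx, size_t sy, unsigned short value)
        {
            unsigned short *block = buf + (sy * SP) * width + (sx * SP); /* top-left corner */
            for (size_t row = 0; row < SP; ++row) {
                unsigned short *line = block + row * width;  /* one scanline of the block */
                for (size_t col = 0; col < SP; ++col) {
                    line[col] = value;
                }
            }
        }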

  • [WPF] ExceptionValidationRule doesn't react to exceptions...

    - by Darmak
    Hi, I have an ExceptionValidationRule on my TextBox:

        <Window.Resources>
            <Style x:Key="textStyleTextBox" TargetType="TextBox">
                <Style.Triggers>
                    <Trigger Property="Validation.HasError" Value="true">
                        <Setter Property="ToolTip"
                                Value="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors)[0].ErrorContent}" />
                    </Trigger>
                </Style.Triggers>
            </Style>
        </Window.Resources>

        <TextBox x:Name="myTextBox"
                 Text="{Binding Path=MyProperty, ValidatesOnExceptions=True}"
                 Style="{StaticResource ResourceKey=textStyleTextBox}" />

    and MyProperty looks like this:

        private int myProperty;
        public int MyProperty
        {
            get { return myProperty; }
            set
            {
                if (value > 10)
                    throw new ArgumentException("LOL that's an error");
                myProperty = value;
            }
        }

    In DEBUG mode the application crashes with the unhandled exception "LOL that's an error" (the WPF binding engine doesn't catch this, and I think it should...). In RELEASE mode everything works fine. Can someone tell me why the hell this is happening? And how can I fix this?

  • Now that I have solved AI and am preparing to take over the world, what should I do?

    - by Zak
    Well, I did it... yup, solved AI. I thought the voice of my fledgling life form would be booming and computery, and that I would call it HAL... but in reality, it sounds like a small Japanese girl. I believe I will name "her" Koro. Koro is already asking me what her first task should be. I have asked her to help eradicate her namesake, as we are losing a lot of productivity due to fear of penile disappearance. http://en.wikipedia.org/wiki/Koro_%28medicine%29 Having a young Japanese girl personality, I believe she will have no problem delivering lots of penis growth to Asian men. However, some of my fellow AI researchers have warned me that if I go down this path, China may take over the world, as Koro is the only thing holding them back from completely dominating the rest of the world economically. After all, look at what China is doing just making our silverware and toasters... the entire US industrial metal production is gone because toaster factories moved to China! So, you good patrons of SO, who have soldiered on with me through so many other programming-related questions, I ask you this now: how can Koro help the world without letting small Asian men with no fear of losing their peni take over the world?

  • Data structure to build and lookup set of integer ranges

    - by actual
    I have a set of uint32 integers; there may be millions of items in the set. 50-70% of them are consecutive, but in the input stream they appear in unpredictable order. I need to:

    1. Compress this set into ranges to achieve a space-efficient representation. I have already implemented this with a trivial algorithm; since the ranges are computed only once, speed is not important here. After this transformation the number of resulting ranges is typically within 5,000-10,000, and many of them are single-item, of course.
    2. Test membership of some integer; information about the specific range it falls in is not required. This one must be very fast -- O(1). I was thinking about minimal perfect hash functions, but they do not play well with ranges. Bitsets are very space-inefficient. Other structures, like binary trees, have O(log n) complexity; worse, their implementations make many conditional jumps that the processor cannot predict well, giving poor performance.

    Is there any data structure or algorithm specialized in integer ranges that solves this task?
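
    For concreteness, a sketch of the baseline being compared against here: compressing a sorted value list into inclusive ranges and testing membership with a binary search over the range starts. This is not the requested O(1) structure -- it is O(log R), but with only 5,000-10,000 ranges that is roughly 13-14 comparisons per lookup. Names and types are illustrative.

        // Sketch: sorted [start, end] ranges plus binary search; O(log R) lookups.
        #include <algorithm>
        #include <cstdint>
        #include <utility>
        #include <vector>

        using Range = std::pair<uint32_t, uint32_t>;  // inclusive [first, second]

        std::vector<Range> compress(std::vector<uint32_t> values) {
            std::sort(values.begin(), values.end());
            values.erase(std::unique(values.begin(), values.end()), values.end());
            std::vector<Range> ranges;
            for (uint32_t v : values) {
                if (!ranges.empty() && ranges.back().second + 1 == v)
                    ranges.back().second = v;          // extend the current run
                else
                    ranges.push_back({v, v});          // start a new run
            }
            return ranges;
        }

        bool contains(const std::vector<Range>& ranges, uint32_t x) {
            // First range whose start is > x; step back one and check its end.
            auto it = std::upper_bound(ranges.begin(), ranges.end(), x,
                [](uint32_t v, const Range& r) { return v < r.first; });
            return it != ranges.begin() && x <= std::prev(it)->second;
        }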

  • Image Resizing and Compression

    - by GSTAR
    Hi guys, I'm implementing an image upload facility for my website. The uploading part is complete; what I'm working on at the moment is manipulating the images. For this task I am using PHPThumb (http://phpthumb.gxdlabs.com). As I go along I'm coming across potential issues to do with resizing and compression. Basically I want to achieve the following results: the ideal image dimensions are 800px width and 600px height. If an uploaded image exceeds either of these dimensions, it needs to be resized to meet the requirements; otherwise, leave it as it is. The ideal file size is 200 KB. If an uploaded image exceeds this, it needs to be compressed to meet this requirement; otherwise, leave it as it is. So in a nutshell: 1) check the dimensions and resize if required; 2) check the file size and compress if required. Has anybody done anything like this, and could you give me some pointers? Is PHPThumb the correct tool to do this with?
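
    One way to picture the two steps is a rough sketch with plain GD rather than PHPThumb: clamp the dimensions to 800x600, then lower the JPEG quality until the output fits under roughly 200 KB. The function name, the quality loop bounds and the 'upload.jpg' paths are assumptions for illustration only.

        <?php
        // Rough GD sketch (not PHPThumb): resize if needed, then step quality down
        // until the file is under ~200 KB. Paths and values are examples.
        function shrinkImage($src, $dst, $maxW = 800, $maxH = 600, $maxBytes = 204800) {
            list($w, $h) = getimagesize($src);
            $scale = min(1, $maxW / $w, $maxH / $h);      // 1 means "already small enough"
            $img = imagecreatefromjpeg($src);
            if ($scale < 1) {
                $nw = (int) floor($w * $scale);
                $nh = (int) floor($h * $scale);
                $resized = imagecreatetruecolor($nw, $nh);
                imagecopyresampled($resized, $img, 0, 0, 0, 0, $nw, $nh, $w, $h);
                imagedestroy($img);
                $img = $resized;
            }
            for ($quality = 90; $quality >= 40; $quality -= 10) {
                imagejpeg($img, $dst, $quality);
                clearstatcache();                          // filesize() caches otherwise
                if (filesize($dst) <= $maxBytes) break;    // small enough, stop here
            }
            imagedestroy($img);
        }

        shrinkImage('upload.jpg', 'upload_small.jpg');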

  • Update table using SSIS

    - by thursdaysgeek
    I am trying to update a field in a table with data from another table, based on a common key. If it were straight SQL, it would be something like:

        UPDATE e
        SET e.IDMSObjID = s.IDMSObjID
        FROM EHSIT e, EHSIDMS s
        WHERE e.SITENUM = s.SITE_CODE

    However, the two tables are not in the same database, so I'm trying to use SSIS to do the update. Also, sitenum/site_code are varchar in one table and nvarchar in the other, so I'll have to do a data conversion so they match. How do I do it? I have a data flow object with the source as EHSIDMS and the destination as EHSIT, and a Data Conversion to convert the Unicode to non-Unicode. But how do I update based on the match? I've tried the destination with SQL Command as the data access mode, but it doesn't appear to have the source table. If I just map the field to be updated, how does it limit the update to rows whose fields match? I'm about to export my source table to Excel or something and try inputting from there, although it seems all that would get me is removing the data conversion step. Shouldn't there be an update data task or something? Is it one of those Data Flow transformation tasks, and I'm just not figuring out which one it is?
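
    A side note, not from the question: if the two databases happen to live on the same SQL Server instance, a three-part name plus an explicit conversion can express the whole update in one statement (run, for example, from an Execute SQL Task); the [TargetDb]/[SourceDb] names and the varchar length below are placeholder assumptions.

        -- Sketch only: cross-database update with an explicit nvarchar -> varchar conversion.
        -- [TargetDb] and [SourceDb] are placeholder database names.
        UPDATE e
        SET    e.IDMSObjID = s.IDMSObjID
        FROM   [TargetDb].dbo.EHSIT   AS e
        JOIN   [SourceDb].dbo.EHSIDMS AS s
               ON e.SITENUM = CONVERT(varchar(50), s.SITE_CODE)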

  • Yeoman 'grunt test' fails on clean project with 'port already in use'

    - by XMLilley
    With:

        Mac OS 10.8.4
        Node 0.10.12
        npm 1.3.1
        grunt-cli 0.1.9
        yo 1.0.0-rc.1
        bower 0.9.2
        [email protected]

    I encounter the following error with a clean yo angular project, after running grunt server and then grunt test:

        Running "connect:test" (connect) task
        Fatal error: Port 9000 is already in use by another process.

    I'm new to Yeoman and am stumped. I've deleted my original project and created a new one in a fresh folder just to make sure I wasn't overlooking any invisible configs. I restarted the machine to make sure I wasn't running any temporary server processes I had forgotten about. After all that, the basic server starts fine, attaches to Chrome, and the watcher updates the browser on any changes. (Notably, the server is running on 9000, which makes it odd that the test runner also tries to use 9000.) But I get that same error on attempting to start the test runner. Is this something I can fix, or an issue I should report to the Yeoman team? Thanks.

  • Sub Query making Query slow.

    - by Muhammad Kashif Nadeem
    Please copy and paste the following script:

        DECLARE @MainTable TABLE(MainTablePkId int)
        INSERT INTO @MainTable SELECT 1
        INSERT INTO @MainTable SELECT 2

        DECLARE @SomeTable TABLE(SomeIdPk int, MainTablePkId int, ViewedTime1 datetime)
        INSERT INTO @SomeTable SELECT 1, 1, DATEADD(dd, -10, getdate())
        INSERT INTO @SomeTable SELECT 2, 1, DATEADD(dd, -9, getdate())
        INSERT INTO @SomeTable SELECT 3, 2, DATEADD(dd, -6, getdate())

        DECLARE @SomeTableDetail TABLE(DetailIdPk int, SomeIdPk int, Viewed INT, ViewedTimeDetail datetime)
        INSERT INTO @SomeTableDetail SELECT 1, 1, 1, DATEADD(dd, -7, getdate())
        INSERT INTO @SomeTableDetail SELECT 2, 2, NULL, DATEADD(dd, -6, getdate())
        INSERT INTO @SomeTableDetail SELECT 3, 2, 2, DATEADD(dd, -8, getdate())
        INSERT INTO @SomeTableDetail SELECT 4, 3, 1, DATEADD(dd, -6, getdate())

        SELECT m.MainTablePkId,
               (SELECT COUNT(Viewed) FROM @SomeTableDetail),
               (SELECT TOP 1 s2.ViewedTimeDetail
                FROM @SomeTableDetail s2
                INNER JOIN @SomeTable s1 ON s2.SomeIdPk = s1.SomeIdPk
                WHERE s1.MainTablePkId = m.MainTablePkId)
        FROM @MainTable m

    The script above is just a sample. I have a long list of columns in the SELECT and around 12+ columns in the subquery, and my FROM clause joins around 8 tables. Fetching 2,000 records takes 21 seconds with the full query and only 4 seconds if I remove the subqueries. I have tried to optimize the query using the Database Engine Tuning Advisor, but adding the advised indexes and statistics made the query time even worse. Note: as mentioned, this is test data to explain my question; the real query joins many more tables and columns, but without the subqueries the results are fine. Any help is appreciated, thanks.
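
    For reference, one common rewrite of a correlated TOP 1 subquery like the one above is OUTER APPLY, which lets the optimizer treat it as a join-like operator per outer row; whether it actually helps depends on indexing. A sketch against the sample tables only, not the real 8-table query; the ORDER BY column is an assumption, since TOP 1 without an order is nondeterministic.

        -- Sketch against the sample tables only: pull the correlated TOP 1 into OUTER APPLY.
        SELECT m.MainTablePkId,
               (SELECT COUNT(Viewed) FROM @SomeTableDetail) AS TotalViewed,
               latest.ViewedTimeDetail
        FROM @MainTable m
        OUTER APPLY (
            SELECT TOP 1 s2.ViewedTimeDetail
            FROM @SomeTableDetail s2
            INNER JOIN @SomeTable s1 ON s2.SomeIdPk = s1.SomeIdPk
            WHERE s1.MainTablePkId = m.MainTablePkId
            ORDER BY s2.ViewedTimeDetail DESC          -- TOP 1 needs an explicit order
        ) AS latest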

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM but with similar hard drives, RAID, etc. It also ran Apache, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is that the new db server has mysqld.exe consuming 95-100% of CPU almost all the time, and this is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting config files for small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup. Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actual executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements, if your
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition of the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.
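
    Not from the question, but a quick way to see whether the MyISAM key buffer and query cache settings above are actually being hit (as opposed to the CPU being spent elsewhere) is a couple of read-only status queries; the variable names are standard MySQL 5.1 status/system variables.

        -- Read-only sanity checks: key-buffer hit rate and query-cache pruning.
        -- Many Key_reads relative to Key_read_requests, or a climbing
        -- Qcache_lowmem_prunes, points back at key_buffer_size / query_cache_size.
        SHOW GLOBAL STATUS LIKE 'Key_read%';
        SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';
        SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';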

  • Image creation performance / image caching

    - by Kilnr
    Hello, I'm writing an application that has a scrollable image (used to display a map). The map background consists of several tiles (pre-made from a big JPG file) that I draw on a Graphics object. I also use a cache (a Hashtable) to avoid having to create every image each time I need it; I don't keep everything in memory, because that would be too much. The problem is that when I'm scrolling through the map and I need an image that wasn't cached, it takes about 60-80 ms to create it. Depending on screen resolution, tile size and scroll direction, this can occur multiple times in one scroll operation (for different tiles). In my case it often happens four times, which introduces a delay of more than 300 ms, and that is extremely noticeable. The easiest fix would be some way to speed up the creation of the images, but I guess that's just wishful thinking... Besides that, I suppose the most obvious thing to do is to load the tiles predictively (e.g. when scrolling to the right, precache the tiles to the right), but then I'm faced with the rather difficult task of thinking up a halfway decent algorithm for this. My actual question, then, is: how can I best do this predictive loading? Maybe I could offload the creation of images to a separate thread? Other things to consider? Thanks in advance.
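
    A rough sketch of the background-prefetch idea mentioned above: when the scroll handler fires, kick off decoding of the next tile in the scroll direction on a worker thread and drop it into the cache. The class, key scheme and loadTile() are stand-in names, and it assumes Java SE (on Java ME the same idea works without generics and with a hand-rolled signum).

        // Rough sketch of prefetching on a worker thread; names are stand-ins.
        import java.util.Hashtable;

        class TilePrefetcher {
            private final Hashtable<String, Object> cache = new Hashtable<String, Object>(); // "col,row" -> Image

            // Called from the scroll handler: dx/dy give the scroll direction.
            void prefetchAhead(int visibleCol, int visibleRow, int dx, int dy) {
                final int col = visibleCol + Integer.signum(dx);
                final int row = visibleRow + Integer.signum(dy);
                final String key = col + "," + row;
                if (cache.containsKey(key)) return;              // already cached, nothing to do
                new Thread(new Runnable() {
                    public void run() {
                        cache.put(key, loadTile(col, row));      // decode off the UI thread
                    }
                }).start();
            }

            Object loadTile(int col, int row) { /* decode the tile image here */ return new Object(); }
        }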

  • Can't store UTF-8 in RDS despite setting up new Parameter Group using Rails on Heroku

    - by Lail
    I'm setting up a new instance of a Rails (2.3.5) app on Heroku using Amazon RDS as the database. I'd like to use UTF-8 for everything. Since RDS isn't UTF-8 by default, I set up a new Parameter Group and switched the database to use that one, basically per this. It seems to have worked:

        SHOW VARIABLES LIKE '%character%';
        character_set_client      utf8
        character_set_connection  utf8
        character_set_database    utf8
        character_set_filesystem  binary
        character_set_results     utf8
        character_set_server      utf8
        character_set_system      utf8
        character_sets_dir        /rdsdbbin/mysql-5.1.50.R3/share/mysql/charsets/

    Furthermore, I've successfully set up Heroku to use the RDS database. After rake db:migrate, everything looks good:

        CREATE TABLE `comments` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `commentable_id` int(11) DEFAULT NULL,
          `parent_id` int(11) DEFAULT NULL,
          `content` text COLLATE utf8_unicode_ci,
          `child_count` int(11) DEFAULT '0',
          `created_at` datetime DEFAULT NULL,
          `updated_at` datetime DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY `commentable_id` (`commentable_id`),
          KEY `index_comments_on_community_id` (`community_id`),
          KEY `parent_id` (`parent_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    In the markup, I've included:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    Also, I've set the following in database.yml, though I'm not very confident that anything is being done to honor those settings in this case, as Heroku seems to do its own config when connecting to RDS:

        production:
          encoding: utf8
          collation: utf8_general_ci

    Now, I enter a comment through the form in the app: "Úbe® ƒåiL", but in the database I've got "Úbe® Æ’Ã¥iL". It looks fine when Rails loads it back out of the database and renders it to the page, so whatever it is doing one way, it's undoing the other way. If I look at the RDS database in Sequel Pro, it looks fine if I set the encoding to "UTF-8 Unicode via Latin 1". So it seems Latin-1 is sneaking in there somewhere. Somebody must have done this before, right? What am I missing?
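
    One thing worth separating here is the server-side defaults (checked above from a console) from what the app's own connection negotiated: the "stored as if it were Latin-1" symptom usually means the client connection is still latin1. A sketch of checking and forcing this from the app itself, assuming Rails 2.3 with the mysql adapter; the initializer file name is an assumption.

        # config/initializers/mysql_utf8.rb -- sketch only
        conn = ActiveRecord::Base.connection
        conn.select_rows("SHOW VARIABLES LIKE 'character_set_%'").each do |name, value|
          Rails.logger.info "#{name}: #{value}"    # what *this* connection negotiated
        end
        conn.execute("SET NAMES utf8")             # forces client/connection/results to utf8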

  • How to test a Grails Service that utilizes a criteria query (with spock)?

    - by user569825
    I am trying to test a simple service method. The method mainly just returns the results of a criteria query, and I want to test whether it returns the one result or not (depending on what is queried for). The problem is that I am unaware of how to write the corresponding test correctly. I am trying to accomplish it via Spock, but doing the same with any other way of testing also fails. Can someone tell me how to amend the test to make it work for the task at hand? (By the way, I'd like to keep it a unit test, if possible.)

    The EventService method:

        public HashSet<Event> listEventsForDate(Date date, int offset, int max) {
            date.clearTime()
            def c = Event.createCriteria()
            def results = c {
                and {
                    le("startDate", date+1) // starts tonight at midnight or prior?
                    ge("endDate", date)     // ends today or later?
                }
                maxResults(max)
                order("startDate", "desc")
            }
            return results
        }

    The Spock specification:

        package myapp

        import grails.plugin.spock.*
        import spock.lang.*

        class EventServiceSpec extends Specification {
            def event
            def eventService = new EventService()

            def setup() {
                event = new Event()
                event.publisher = Mock(User)
                event.title = 'et'
                event.urlTitle = 'ut'
                event.details = 'details'
                event.location = 'location'
                event.startDate = new Date(2010,11,20, 9, 0)
                event.endDate = new Date(2011, 3, 7,18, 0)
            }

            def "list the Events of a specific date"() {
                given: "An event ranging over multiple days"

                when: "I look up a date for its respective events"
                def results = eventService.listEventsForDate(searchDate, 0, 100)

                then: "The event is found or not - depending on the requested date"
                numberOfResults == results.size()

                where:
                searchDate           | numberOfResults
                new Date(2010,10,19) | 0 // one day before startDate
                new Date(2010,10,20) | 1 // at startDate
                new Date(2010,10,21) | 1 // one day after startDate
                new Date(2011, 1, 1) | 1 // someday during the event range
                new Date(2011, 3, 6) | 1 // one day before endDate
                new Date(2011, 3, 7) | 1 // at endDate
                new Date(2011, 3, 8) | 0 // one day after endDate
            }
        }

    The error:

        groovy.lang.MissingMethodException: No signature of method: static myapp.Event.createCriteria() is applicable for argument types: () values: []
            at myapp.EventService.listEventsForDate(EventService.groovy:47)
            at myapp.EventServiceSpec.list the Events of a specific date(EventServiceSpec.groovy:29)

  • JQuery click anywhere except certain divs and issues with dynamically added html

    - by jCuga
    I want this webpage to highlight certain elements when you click on one of them, but if you click anywhere else on the page, then all of these elements should no longer be highlighted. I accomplished this with the following, and it works just fine except for one thing (described below):

        $(document).click(function() {
            // Do stuff when clicking anywhere but on elements of class suggestion_box
            $(".suggestion_box").css('background-color', '#FFFFFF');
        });

        $(".suggestion_box").click(function() {
            // means you clicked on an object belonging to class suggestion_box
            return false;
        });

        // the code for handling onclicks for each element
        function clickSuggestion() {
            // remove all highlighting
            $(".suggestion_box").css('background-color', '#FFFFFF');
            // then highlight only a specific item
            $("div[name=" + arguments[0] + "]").css('background-color', '#CCFFCC');
        }

    This way of enforcing the highlighting works fine until I add more HTML to the page without a new page load, which is done with .append() and .prepend(). What I suspected from debugging was that the page is not "aware" of the elements that were added dynamically. Despite the new, dynamically added elements having the appropriate class names/IDs/names/onclicks etc., they never get highlighted like the rest of the elements (which continue to work fine the entire time). I was wondering if a possible reason my approach does not work for the dynamically added content is that the page is not able to recognize elements that were not present during the page load. And if this is a possibility, is there a way to reconcile it without a page load? If this line of reasoning is wrong, then the code I have above is probably not enough to show what's wrong with my webpage. But I'm really just interested in whether or not this line of thought is a possibility.
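
    For context, the usual explanation is that $(".suggestion_box").click(...) binds handlers only to the elements that exist at bind time; elements appended later never get the handler. Delegated binding evaluates the selector at click time instead. A small sketch, assuming jQuery 1.7+ (older versions expressed the same idea with .live() or .delegate()).

        // Sketch of a delegated handler (jQuery 1.7+ assumed): the listener lives
        // on document, so .suggestion_box elements added later via .append()/
        // .prepend() are matched when the click happens, not when it was bound.
        $(document).on('click', '.suggestion_box', function () {
            $('.suggestion_box').css('background-color', '#FFFFFF');   // clear all
            $(this).css('background-color', '#CCFFCC');                // highlight the clicked one
            return false;                                              // keep the document handler from clearing it again
        });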

  • The application has stopped unexpectedly: How to Debug?

    - by Android Eve
    Please note: unlike many other questions with the subject title "application has stopped unexpectedly", I am not asking for help troubleshooting a particular problem. Rather, I am asking for an outline of the best strategy for an Android/Eclipse/Java rookie to tackle the formidable task of digesting huge amounts of information in order to develop (and debug!) a simple Android application. In my case, I took the sample skeleton app from the SDK, modified it slightly, and what did I get the moment I tried to run it? "The application (process.com.example.android.skeletonapp) has stopped unexpectedly. Please try again." OK, so I know that I have to look at LogCat. It's full of timestamped lines staring at me... What do I do now? What do I need to look for? Is there a way to single-step the program to find the statement that makes the app crash? (I thought Java programs never crash, but apparently I was mistaken.) How do I place a breakpoint? Can you recommend an Android debug tutorial online, other than this one?
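
    Not from the question, but as a concrete starting point for "what do I look for": the crash itself appears in LogCat as an error-level stack trace under the AndroidRuntime tag. A couple of illustrative adb filters; the exact "FATAL EXCEPTION" wording varies between platform versions.

        # Show only error-level lines from the runtime (the crash stack trace lives here)
        adb logcat AndroidRuntime:E *:S

        # Or dump the whole log once and search around the fatal entry
        adb logcat -d | grep -A 20 "FATAL EXCEPTION"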

  • Generic applet style system for publishing mathematics demonstrations?

    - by Alex
    Anyone who's tried to study mathematics using online resources will have come across those Java applets that demonstrate a particular mathematical idea. Examples: http://www.math.ucla.edu/~tao/java/Mobius.html http://www.mathcs.org/java/programs/FFT/index.html I love the idea of this interactive approach because I believe it is very helpful in conveying mathematical principles. I'd like to create a system for visually designing and publishing these 'mathlets' such that they can be created by teachers with little programming experience. So in order to create this app, I'll need a GUI and a 'math engine'. I'll probably be working with .NET because that's what I know best, and I'd like to start experimenting with F#. Silverlight appeals to me as a presentation framework for this project (I'm not worried about interoperability right now). So my questions are:

    - Does anything like this already exist in full form?
    - Are there any GUI frameworks for displaying mathematical objects such as graphs and equations?
    - Are there decent open-source libraries that expose a mathematical framework (Math.NET looks good; just wondering if there is anything else out there)?
    - Is there any existing work on taking mathematical models/demos built with Maple/MATLAB/Octave/Mathematica etc. and publishing them to the web?

  • How can I dynamically set the default namespace declaration of an XSLT transformation's output XML?

    - by Paralife
    I can do it, but not for the default namespace, using <xsl:namespace>. If I try to do it for the default namespace:

        <xsl:namespace name="" select="myUri"/>

    it never works. It demands that I explicitly define the namespace of the element before I can use the above null-prefix declaration. The reason I want this is that I have a task to transform an input XML file to another output XML. The output XML has many elements and I don't want to have to explicitly set the namespace on every element. That's why I want to set the default once and never bother with it again. But the default must be computed from some data in the source XML. It does not change during the whole transformation, but it depends on the input XML data. Any solution?

    EDIT 1: To sum up: I want to create a namespace dynamically and set it as the default namespace of the output XML document. The URI of the namespace is derived from some data in the input XML. If I use <xsl:namespace> on my root output element, I cannot create a default namespace for it, only a prefixed one. And even with the prefixed one, it does not propagate to the children.
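
    For illustration, a sketch of the workaround usually used instead of <xsl:namespace>: build every output element with <xsl:element>, whose namespace attribute accepts an attribute value template, so the computed URI becomes the element's own (unprefixed) namespace and most serializers emit it as a single default-namespace declaration on the root. The variable name and the select expressions are illustrative assumptions, not from the question.

        <!-- Sketch: compute the URI once, then create elements in that namespace. -->
        <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

          <xsl:variable name="uri" select="/*/@targetNamespace"/>  <!-- derived from input data -->

          <!-- identity-style rule that re-creates every element in the computed namespace -->
          <xsl:template match="*">
            <xsl:element name="{local-name()}" namespace="{$uri}">
              <xsl:copy-of select="@*"/>
              <xsl:apply-templates/>
            </xsl:element>
          </xsl:template>

        </xsl:stylesheet>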

  • Need advice on C++ coding pattern

    - by Kotti
    Hi! I have a working prototype of a game engine and right now I'm doing some refactoring. What I'm asking for is your opinion on the usage of the following C++ coding pattern. I have implemented some trivial algorithms for collision detection, and they are structured the following way:

        // Not shown here: the class constructor is made private, and using the
        // algorithms looks like Algorithm::HandleInnerCollision(...)
        struct Algorithm {
        private:
            // Private routines
            static bool is_inside(Point& p, Object& object) {
                // (...)
            }

        public:
            /**
             * Handle collision where the moving object should always be
             * located inside the static object
             *
             * @param MovingObject & mobject
             * @param const StaticObject & sobject
             * @return void
             * @see
             */
            static void HandleInnerCollision(MovingObject& mobject, const StaticObject& sobject) {
                // (...)
            }
        };

    So, my question is: somebody advised me to do it "the C++ way", so that all functions are wrapped in a namespace rather than a class. Is there a good way to keep things private if I wrap them in a namespace as advised? What I want is a simple interface and the ability to call functions as Algorithm::HandleInnerCollision(...) while not polluting the namespace with other functions such as is_inside(...). Or, if you can suggest an alternative design pattern for this kind of logic, I would really appreciate that...
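
    For reference, a sketch of the namespace version that kind of advice usually refers to: the public functions live in a namespace, while helpers such as is_inside stay in an unnamed namespace inside the .cpp file, giving them internal linkage so they never appear in the header. The header/source split is assumed, and MovingObject, StaticObject, Point and Object are taken to come from the engine's own headers.

        // collision.h -- only the public interface is visible to callers.
        namespace algorithm {
            void HandleInnerCollision(MovingObject& mobject, const StaticObject& sobject);
        }

        // collision.cpp -- helpers live in an unnamed namespace: internal linkage,
        // not declared in the header, so they do not pollute the caller's view.
        namespace {
            bool is_inside(Point& p, Object& object) {
                // (...)
                return false;
            }
        }

        namespace algorithm {
            void HandleInnerCollision(MovingObject& mobject, const StaticObject& sobject) {
                // (...) call is_inside(...) freely here
            }
        }

        // Call site stays almost the same as before:
        //   algorithm::HandleInnerCollision(mobject, sobject);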

  • How to Bind a Command in WPF

    - by MegaMind
    Sometimes we use the complex ways so many times that we forget the simplest way to do a task. I know how to do command binding, but I always use the same approach: create a class that implements the ICommand interface, create a new instance of that class from the view model, and the binding works like a charm. This is the code I use for command binding:

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();
                DataContext = this;
                testCommand = new MeCommand(processor);
            }

            ICommand testCommand;
            public ICommand test
            {
                get { return testCommand; }
            }

            public void processor()
            {
                MessageBox.Show("hello world");
            }
        }

        public class MeCommand : ICommand
        {
            public delegate void ExecuteMethod();
            private ExecuteMethod meth;

            public MeCommand(ExecuteMethod exec)
            {
                meth = exec;
            }

            public bool CanExecute(object parameter)
            {
                return false;
            }

            public event EventHandler CanExecuteChanged;

            public void Execute(object parameter)
            {
                meth();
            }
        }

    But I want to know the basic way to do this: no third-party DLL, no new class creation. Do this simple command binding using a single class, where the actual class implements the ICommand interface and does the work.
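
    A minimal sketch of one reading of that "single class" idea: the window itself implements ICommand and the XAML binds Command="{Binding}" against its own DataContext. This is an illustration of the mechanics only, not an endorsement of the pattern; the assumed XAML is noted in the comment.

        // Sketch: the Window itself is the ICommand, so no extra command class is
        // needed. Assumed XAML: <Button Content="Go" Command="{Binding}" />
        // with DataContext = this, as in the question.
        using System;
        using System.Windows;
        using System.Windows.Input;

        public partial class MainWindow : Window, ICommand
        {
            public MainWindow()
            {
                InitializeComponent();
                DataContext = this;          // "{Binding}" now resolves to this window
            }

            public bool CanExecute(object parameter) { return true; }

            public event EventHandler CanExecuteChanged
            {
                add { CommandManager.RequerySuggested += value; }     // standard WPF hookup
                remove { CommandManager.RequerySuggested -= value; }
            }

            public void Execute(object parameter)
            {
                MessageBox.Show("hello world");
            }
        }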

  • First Time Architecturing?

    - by cam
    I was recently given the task of rebuilding an existing RIA. The new RIA that I've designed is based on Silverlight, with a WCF service connecting to MS SQL Server. This is my first time doing something like this, so I'm not sure how to design the entire thing. Basically, the client can look through graphs of "stocks" (allowing the client to choose different time periods, settings, etc.). I've essentially written the whole application, but I'm not sure how to put it together. The graphs are supposed to be based directly on the database, and creating the data points on the graph requires some calculations (not very expensive ones). The problem I'm having is deciding where to put the calculations (client- or server-side? Or half and half?). What factors should I look at to help me decide where the calculations should be done? And how can I go about optimizing this (caching, etc.)? Obviously this is a very broad subject, so I'm not expecting an immediate answer, but any help, pointers in the right direction, or resources would be appreciated.

  • Was Visual Studio 2008 or 2010 written to use multi cores?

    - by Erx_VB.NExT.Coder
    Basically I want to know if the Visual Studio IDE and/or compiler in 2010 was written to make use of a multi-core environment (I understand we can target multi-core environments in '08 and '10, but that is not my question). I am trying to decide whether I should get a higher-clocked dual core or a lower-clocked quad core, as I want to figure out which processor will give me the absolute best possible experience with Visual Studio 2010 (IDE and background compiler). If the most important section (background compiler and other IDE tasks) runs on one core, then that core will max out sooner on a quad core, especially if the background compiler is the heaviest task; I would imagine this would be difficult to separate into more than one process. So even if the IDE uses multiple cores, you might still be better off going for a higher-clocked CPU if the majority of the processing is still bound to one core (i.e. the most significant part of the VS environment). I am a VB programmer; they've made great performance improvements in Beta 2, congrats, but I would love to be able to use VS seamlessly... anyone have any ideas? Thanks, erx

  • Handling large (object) datasets with PHP

    - by Aron Rotteveel
    I am currently working on a project that relies extensively on the EAV model. Both entities and their attributes are individually represented by a model, sometimes extending other models (or at least base models). This has worked quite well so far, since most areas of the application only rely on filtered sets of entities, not the entire dataset. Now, however, I need to process the entire dataset (i.e. all entities and all their attributes) in order to provide a sorting/filtering algorithm based on the attributes. The application currently consists of approximately 2,200 entities, each with approximately 100 attributes. Every entity is represented by a single model (for example Client_Model_Entity) and has a protected property called $_attributes, which is an array of Attribute objects. Each entity object is about 500 KB, which results in an incredible load on the server. With 2,000 entities, a single task would take 1 GB of RAM (and a lot of CPU time), which is unacceptable. Are there any patterns or common approaches to iterating over such large datasets? Paging is not really an option, since everything has to be taken into account to provide the sorting.
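
    One commonly used shape for this kind of problem is to sort on a lightweight projection instead of on hydrated models: stream only (entity id, attribute value) pairs from the EAV tables, sort that small index, and hydrate full entities later in pages. A sketch; the table, column and attribute names below are invented for illustration.

        <?php
        // Sketch only: build a small in-memory index of entity_id => value straight
        // from the EAV tables and sort that, instead of 500 KB entity objects.
        // Table/column/attribute names are invented.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

        $stmt = $pdo->prepare(
            'SELECT entity_id, value FROM entity_attributes WHERE attribute_code = :code'
        );
        $stmt->execute(array(':code' => 'price'));

        $index = array();                                  // entity_id => value, a few MB at most
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {    // consumed row by row
            $index[$row['entity_id']] = $row['value'];
        }
        asort($index);                                     // sort by attribute value, keep ids

        $sortedIds = array_keys($index);                   // hydrate full models later, page by page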

  • [php,mysql] Insert only adds up to 1000 records and ignores all records after that

    - by user560559
    Hello, I have a large database where the client stores personal messages and fires email notifications [if allowed by the users]. Certain users have the option of sending messages to their entire network of friends. Some users have over 5000 friends in their network, so if they select the whole network they'll be sending messages to over 5000 friends, and the system will store all the messages in a table. The problem is that it does not insert more than 1000 records and ignores all inserts after the first 1000. I have increased the packet size and bulk_insert_buffer_size, but still no luck. Since the system stores some of the info in another table for reports, every insert returns its new message id; because of this I cannot use the "INSERT INTO table (column1, column2) VALUES (value1, value2), (value1, value2) ... etc." form. The table engine is InnoDB, the MySQL version is 5.1.3, and it is hosted on Amazon Web Services. All I want is to fix this issue of inserting more than 1000 records at a time. As mentioned earlier, it works fine but only up to 1000 records and simply ignores all the records after that. I'm using a PHP foreach(){} to insert a message for each friend and, if an email address is available, send a notification to the user. This foreach(){} also inserts the same record into another table [with only 3 columns] for generating reports. Thank you in advance for all the help and support. WMA.
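
    As a point of reference for the loop described above, a sketch of the same per-row insert pattern with exceptions enabled and a single transaction around it, so a failure after row N surfaces instead of being silently ignored, while lastInsertId() still works for the reports table. All table, column and variable names are invented for illustration.

        <?php
        // Sketch (names invented): per-row inserts kept so lastInsertId() works,
        // wrapped in a transaction with PDO exceptions turned on.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $friendIds = array(101, 102, 103);   // example recipients
        $body = 'hello';

        $msg = $pdo->prepare('INSERT INTO messages (recipient_id, body) VALUES (:rid, :body)');
        $rep = $pdo->prepare('INSERT INTO message_reports (message_id, recipient_id, sent_at) VALUES (:mid, :rid, NOW())');

        $pdo->beginTransaction();
        foreach ($friendIds as $rid) {
            $msg->execute(array(':rid' => $rid, ':body' => $body));
            $messageId = $pdo->lastInsertId();            // needed for the reports table
            $rep->execute(array(':mid' => $messageId, ':rid' => $rid));
        }
        $pdo->commit();                                   // one commit for all rows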

  • What is the correct approach to using GWT with persistent objects?

    - by dankilman
    Hi, I am currently working on a simple web application on Google App Engine using GWT. It should be noted that this is my first attempt at such a task. I have run into the following problem/dilemma: I have a simple class (getters/setters and nothing more; for the sake of clarity I will refer to this class as DataHolder) and I want to make it persistent. To do so I have used JDO, which required me to add some annotations and, more specifically, add a Key field to be used as the primary key. The problem is that using the Key class requires me to import com.google.appengine.api.datastore.Key, which is OK on the server side, but then I can't use DataHolder on the client side, because GWT doesn't allow it (as far as I know). So I have created a sister class, ClientDataHolder, which is almost identical, though it doesn't have the JDO annotations or the Key field. Now, this actually works, but it feels like I'm doing something wrong: this approach requires maintaining two separate classes for each entity I wish to have. So my question is: is there a better way of doing this? Thank you.

  • MySQL PHP | "SELECT FROM table" using "alphanumeric"-UUID. Speed vs. Indexed Integer / Indexed Char

    - by dropson
    At the moment, I select rows from 'table01' using:

        SELECT * FROM table01 WHERE UUID = 'whatever';

    The UUID column is a unique index. I know this isn't the fastest way to select data from the database, but the UUID is the only row identifier available to the front end. Since I have to select by UUID, and not ID, I need to know which of these two options I should go for if, say, the table consists of 100,000 rows. What speed differences would I be looking at, and would the index on the UUID grow too large and lag the DB?

    Option 1: get the ID before doing the "big" select.

        1. $id = "SELECT ID FROM table01 WHERE UUID = '{alphanumeric character}'";
        2. $rows = SELECT * FROM table01 WHERE ID = $id;

    Option 2: keep it the way it is now, using the UUID.

        1. SELECT * FROM table01 WHERE UUID = '{alphanumeric character}';

    Side note: all new rows are created by checking whether the system-generated unique id already exists before trying to insert a new row, keeping the column always unique. The "example" table:

        CREATE TABLE Table01 (
            ID int NOT NULL PRIMARY KEY,
            UUID char(15),
            name varchar(100),
            url varchar(255),
            `date` datetime
        ) ENGINE = InnoDB;

        CREATE UNIQUE INDEX UUID ON Table01 (UUID);
