Search Results

Search found 42297 results on 1692 pages for 'run'.


  • List modification in Python

    - by user2945143
    We are given an algorithm that modifies a list of the numbers 1 to 28. There are 5 steps in the algorithm, and we have written a function for each step (all correct). We now need to write a function that combines all 5 steps. The algorithm modifies the list to produce a value, and each time you generate a new value, you are supposed to use the list created by the previous run of the algorithm. This is what we have so far:

        get_card_at_top_index(insert_top_to_bottom(triple_cut(move_joker_2(move_joker_1(deck)))))

    When we run the code to generate values with get_card_at_top_index, the first answer is correct, but the rest are not: instead of starting from the new list, Python uses the value that it generated in the previous step. What did we do wrong?

    UPDATE: The other 5 functions passed their tests, so they are correct. Step 1 takes the original list and produces list1, step 2 takes list1 and produces list2, step 3 takes list2 and produces list3, step 4 takes list3 and produces list4, and step 5 takes list4 and produces list5, from which we generate a number. We then need to run the algorithm again, starting from step 1 with list5, to generate 25 more numbers.
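
    For reference, here is a minimal Python sketch of one way to structure the repetition, assuming the five step functions named above each take a deck (a list) and return the modified deck. The point is that the deck variable is reassigned at every step, so each round starts from the list left behind by the previous round:

        # Sketch only: the five step functions are the ones from the question
        # and are assumed to return the modified list (not mutate it and return None).
        def generate_numbers(deck, count):
            numbers = []
            for _ in range(count):
                deck = move_joker_1(deck)          # step 1
                deck = move_joker_2(deck)          # step 2
                deck = triple_cut(deck)            # step 3
                deck = insert_top_to_bottom(deck)  # step 4
                numbers.append(get_card_at_top_index(deck))  # step 5
            return numbers

    If the step functions instead mutate the deck in place and return None, the reassignments would need to be dropped.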

  • Performance of inter-database query (between linked servers)

    - by Swoosh
    I have an import between 2 linked servers. Basically, I have to pull the data from a multi-table join into a table on my side. The current query is something like this:

        select a.*
        from db1.dbo.tbl1 a
        inner join db1.dbo.tbl2 on ...
        inner join db1.dbo.tbl3 on ...
        inner join db1.dbo.tbl4 on ...
        inner join db2.dbo.myside on ...

    Here db1 is the linked server and db2 is my own database. After this, I use an INSERT INTO + SELECT to add the data to a table in db2 (usually a few hundred records, with the import running once a minute). My question is about performance. The tables on the linked server (tbl1, tbl2, tbl3, tbl4) are huge tables, with millions of records, and they are slowing down the import process. I was told that if I do the join on the "other" side (db1, the linked server), for example in a stored procedure, then even if the query looks the same, it would run faster. Is that right? This is kind of hard to test. Note that the join contains a table from my database too. Also, are there other "tricks" I could use to make this run faster? Thanks

  • Execute JavaScript from within a C# assembly

    - by ScottKoon
    I'd like to execute JavaScript code from within a C# assembly and have the results of the JavaScript code returned to the calling C# code. It's easier to define the things I'm not trying to do: I'm not trying to call a JavaScript function on a web page from my code-behind. I'm not trying to load a WebBrowser control. I don't want the JavaScript to perform an AJAX call to a server. What I want to do is write unit tests in JavaScript and have the unit tests output JSON (even plain text would be fine). Then I want to have a generic C# class/executable that can load the file containing the JS, run the JS unit tests, scrape/load the results, and return a pass/fail with details during a post-build task. I think it's possible using the old ActiveX ScriptControl, but it seems like there ought to be a .NET way to do this without using Silverlight, the DLR, or anything else that hasn't shipped yet. Anyone have any ideas?

    Update: from Brad Abrams' blog:

        namespace Microsoft.JScript.Vsa
        {
            [Obsolete("There is no replacement for this feature. Please see the ICodeCompiler documentation for additional help. http://go.microsoft.com/fwlink/?linkid=14202")]

    Clarification: We have unit tests for our JavaScript functions that are written in JavaScript using the JSUnit framework. Right now, during our build process, we have to manually load a web page and click a button to ensure that all of the JavaScript unit tests pass. I'd like to be able to execute the tests during the post-build process when our automated C# unit tests are run, report the success/failure alongside our C# unit tests, and use them as an indicator of whether or not the build is broken.

  • iMX31 dependencies?

    - by Abhi
    Dear all, I am a beginner with Silverlight applications, so I first looked at the demo application provided with WinCE 6.0 R3 at WINCE600\PUBLIC\COMMON\OAK\DEMOS\XAMLPERF (this contains the C++ code) and WINCE600\PUBLIC\COMMON\OAK\FILES\XAMLPERF (this contains the XAML file with the images). Before running this application in an emulator, I proceeded as follows: first, in my workspace I went to the catalog items and added "Silverlight for Windows Embedded" from the catalog's drop-down menu. Then I right-clicked in the solution explorer, chose Properties, and under Configuration selected Environment Variables, where I added a new variable called sysgen_samplexamlperf and assigned the value 1 to it. After rebuilding the application, I dumped the image onto the emulator, and on the device emulator's desktop I could see the exe file, run it, and see the application working fine with 3D effects. I then did the same on iMX31 hardware, and the application did not run as well as it did in the emulator. So now I suspect there is some dependency when running the application on hardware. What could that dependency be? Also, in the location WINCE600\PUBLIC\COMMON\OAK\FILES\XAMLPERF the images are in PNG format; is there any dependency on the image format? Thanks and regards

  • What would be a better implementation of a shared variable among subclasses

    - by Churk
    I currently have a Spring unit-testing application, and it requires me to get a session cookie from a foreign authentication source. The problem is that this authentication process is fairly expensive and time consuming, and I am trying to create a structure where I authenticate once, in any one subclass, and any subsequently created subclass reuses this session cookie without hitting the authentication process again. My problem right now is that the static cookie is null each time another subclass is created. I've also been reading that using a static as a global variable is a bad idea, but I couldn't think of another way to do this, given that the Spring framework sets things up at run time, that would let me set the cookie so that all the other classes can use it. Another piece of information: the variable is in use but can change at run time. It is not a single user being signed in and used across the board. It is more like: Sub1 calls login and we have a cookie; multiple tests then use that login until some SubX comes in and says "I am using different credentials, so I need to log in as something else", and this repeats. Here is an outline of my code:

        public class Parent implements InitializingBean {
            protected static String BASE_URL;
            public static Cookie cookie;

            // ... all default InitializingBean methods ...

            afterPropertiesSet() {
                cookie = // login process returns a cookie
            }
        }

        public class Sub1 extends Parent {
            @Resource
            public String baseURL;

            @PostConstruct
            public void init() {
                // set parent's BASE_URL to my baseURL
                BASE_URL = baseURL;
            }

            public void doSomething() {
                // Do something with cookie, because it should have been set by the parent class
            }
        }

        public class Sub2 extends Parent {
            @Resource
            public String baseURL;

            @PostConstruct
            public void init() {
                // set parent's BASE_URL to my baseURL
                BASE_URL = baseURL;
            }

            public void doSomethingElse() {
                // Do something with cookie, because it should have been set by the parent class
            }
        }
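
    The pattern being described (one expensive login, cached and shared until a caller asks for different credentials) can be sketched language-neutrally. Here is a minimal Python version of the idea, with the login call a made-up placeholder:

        def expensive_login(credentials):
            # Placeholder for the slow external authentication call.
            return "cookie-for-" + credentials

        class SessionCache:
            _cookie = None        # shared across every user of the cache
            _credentials = None   # credentials the cached cookie belongs to

            @classmethod
            def get_cookie(cls, credentials):
                # Re-authenticate only when there is no cookie yet, or when
                # a caller asks for a different identity than the cached one.
                if cls._cookie is None or credentials != cls._credentials:
                    cls._cookie = expensive_login(credentials)
                    cls._credentials = credentials
                return cls._cookie

    Here one cache object, rather than each test subclass, owns the shared state, so no subclass can reset it accidentally during its own initialization.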

  • File name containing more than 16 characters inside parentheses failing

    - by Tom anMoney
    I am generating file names that contain a timestamp in the following format: "base_name (yyyy-mm-dd hhmmss).ext". This seems to cause a problem on Android. Here's my log:

        /storage/sdcard0/anMoney/transfer/Net worth over time _ Forecast (2012-11-19 110550).pdf
        E/Gmail (11802): java.io.FileNotFoundException: /storage/sdcard0/myapp/transfer/Net worth over time _ Forecast (2012-11-19 110550).pdf: open failed: ENOENT (No such file or directory)
        E/Gmail (11802): at libcore.io.IoBridge.open(IoBridge.java:416)
        E/Gmail (11802): at java.io.FileInputStream.<init>(FileInputStream.java:78)
        E/Gmail (11802): at java.io.FileInputStream.<init>(FileInputStream.java:105)
        E/Gmail (11802): at android.content.ContentResolver.openInputStream(ContentResolver.java:445)
        E/Gmail (11802): at com.google.android.gm.provider.MailEngine.cacheAttachment(MailEngine.java:3054)
        E/Gmail (11802): at com.google.android.gm.provider.MailEngine.sendOrSaveDraft(MailEngine.java:2746)
        E/Gmail (11802): at com.google.android.gm.provider.MailProvider.sendOrSaveDraft(MailProvider.java:477)
        E/Gmail (11802): at com.google.android.gm.provider.MailProvider.insert(MailProvider.java:534)
        E/Gmail (11802): at android.content.ContentProvider$Transport.insert(ContentProvider.java:201)
        E/Gmail (11802): at android.content.ContentResolver.insert(ContentResolver.java:864)
        E/Gmail (11802): at com.google.android.gm.provider.Gmail$MessageModification.sendOrSaveNewMessage(Gmail.java:3576)
        E/Gmail (11802): at com.google.android.gm.ComposeActivity$SendOrSaveTask$1.onInitializationComplete(ComposeActivity.java:1765)
        E/Gmail (11802): at com.google.android.gm.provider.MailEngine$5.run(MailEngine.java:1006)
        E/Gmail (11802): at android.os.Handler.handleCallback(Handler.java:615)
        E/Gmail (11802): at android.os.Handler.dispatchMessage(Handler.java:92)
        E/Gmail (11802): at android.os.Looper.loop(Looper.java:137)
        E/Gmail (11802): at android.os.HandlerThread.run(HandlerThread.java:60)
        E/Gmail (11802): Caused by: libcore.io.ErrnoException: open failed: ENOENT (No such file or directory)

    Now, if I trim the file name to have only 16 characters within the parentheses, everything works as expected, and I am able to send the file as a Gmail attachment. The following file name works fine:

        /storage/sdcard0/myapp/transfer/Net worth over time _ Forecast (2012-11-19 11070).pdf

    I tried the following troubleshooting:

    - It's not the overall length of the file name: if I shorten the base name, the same behavior remains.
    - It's not Gmail: uploading the file to Google Drive fails similarly.
    - 16 characters inside the parentheses work, but not 17.
    - It's not the space character inside the parentheses that causes the issue: I replaced it with a dash and got the same problem.

    Does anybody have any ideas on what's going on here?

  • Port scanning using threadpool

    - by thenry
    I am trying to run a small app that scans ports and checks whether they are open, as practice with thread pools. The console window asks for a number and then scans ports 1 to X, displaying for each port whether it is open or closed. My problem is that as it goes through the ports, it sometimes stops prematurely. It doesn't stop at just one number either; it's pretty random. For example, if I specify 200, the console will scroll through each port and then stop at 110. The next time I run it, it stops at 80. Code below (I left some things out; assume all variables are declared where they should be). The first part is in Main:

        static void Main(string[] args)
        {
            string portNum;
            int convertedNum;
            Console.WriteLine("Scanning ports 1-X");
            portNum = Console.ReadLine();
            convertedNum = Convert.ToInt32(portNum);
            try
            {
                for (int i = 1; i <= convertedNum; i++)
                {
                    ThreadPool.QueueUserWorkItem(scanPort, i);
                    Thread.Sleep(100);
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("exception " + e);
            }
        }

        static void scanPort(object o)
        {
            TcpClient scanner = new TcpClient();
            try
            {
                scanner.Connect("127.0.0.1", (int)o);
                Console.WriteLine("Port {0} open", o);
            }
            catch
            {
                Console.WriteLine("Port {0} closed", o);
            }
        }
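
    For comparison, here is a minimal sketch of the same scan written in Python, using a thread pool that is explicitly joined before the program exits, so no queued scan can be cut off early (the host and timeout are placeholder choices):

        import socket
        from concurrent.futures import ThreadPoolExecutor

        def scan_port(port, host="127.0.0.1", timeout=1.0):
            # Try a TCP connect; report whether the port accepted it.
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    print("Port {0} open".format(port))
            except OSError:
                print("Port {0} closed".format(port))

        if __name__ == "__main__":
            top = int(input("Scan ports 1-X. Enter X: "))
            # Leaving the with-block waits for every submitted scan to finish.
            with ThreadPoolExecutor(max_workers=20) as pool:
                for port in range(1, top + 1):
                    pool.submit(scan_port, port)

    The explicit wait at the end is the detail to note: if the thread that queues the work returns without waiting, pending pool items may never run.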

  • Concurrent Threads in C# using BackgroundWorker

    - by Jim Fell
    My C# application is such that a background worker is being used to wait for the acknowledgement of some transmitted data. Here is some pseudocode demonstrating what I'm trying to do:

        UI_thread
        {
            TransmitData()
            {
                // load data for tx
                // fire off TX background worker
            }
            RxSerialData()
            {
                // if received data is ack, set ack received flag
            }
        }

        TX_thread
        {
            // transmit data
            // set ack wait timeout
            // fire off ACK background worker
            // wait for ACK background worker to complete
            // evaluate status of ACK background worker as completed, failed, etc.
        }

        ACK_thread
        {
            // wait for ack received flag to be set
        }

    What happens is that the ACK BackgroundWorker times out, and the acknowledgement is never received. I'm fairly certain that it is being transmitted by the remote device, because that device has not changed at all, and the C# application is transmitting. I changed the ack thread from this (when it was working)...

        for (i = 0; (i < waitTimeoutVar) && (!bAckRxd); i++)
        {
            System.Threading.Thread.Sleep(1);
        }

    ...to this...

        DateTime dtThen = DateTime.Now;
        DateTime dtNow;
        TimeSpan stTime;
        do
        {
            dtNow = DateTime.Now;
            stTime = dtNow - dtThen;
        } while ((stTime.TotalMilliseconds < waitTimeoutVar) && (!bAckRxd));

    The latter generates a very accurate wait time compared to the former. However, I am wondering if the removal of the Sleep call is interfering with the ability to receive serial data. Does C# only allow one thread to run at a time? That is, do I have to put threads to sleep at some point to allow other threads to run? Any thoughts or suggestions you may have would be appreciated. I am using Microsoft Visual C# 2008 Express Edition. Thanks.
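
    The underlying pattern here (one thread waiting, with a timeout, for a flag that another thread sets) is usually expressed with an event object rather than a polling loop, since a blocking wait yields the CPU to other threads instead of spinning. A minimal Python sketch of that pattern, with the serial receive simulated by a timer:

        import threading
        import time

        ack_received = threading.Event()

        def rx_handler():
            time.sleep(0.5)       # pretend the ack arrives after 500 ms
            ack_received.set()    # the "set ack received flag" step

        threading.Thread(target=rx_handler).start()

        # wait() blocks until the flag is set or the timeout elapses,
        # and returns True only if the flag was actually set.
        if ack_received.wait(timeout=2.0):
            print("ack received")
        else:
            print("ack wait timed out")

    (In .NET the analogous primitive would be a wait handle such as ManualResetEvent; this sketch only illustrates the shape of the approach, not the asker's exact code.)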

  • What kind of data do I pass into a Django Model.save() method?

    - by poswald
    Let's say that we are getting a form POSTed to us in Django like this:

        rate=10
        items=[23,12,31,52,83,34]

    The items are primary keys of an Item model. I have a bunch of business logic that will run and create more items based on this data, the results of some db lookups, and some business logic. I want to put that logic into a save signal or an overridden Model.save() method of another model (let's call it Inventory). The business logic will run when I create a new Inventory object using this form data. Inventory will look like this:

        class Inventory(models.Model):
            picked_items = models.ManyToManyField(Item, related_name="items_picked_set")
            calculated_items = models.ManyToManyField(Item, related_name="items_calculated_set")
            rate = models.DecimalField()
            # ... other fields here ...

    New calculated_items will be created based on the passed-in items, which will be stored as picked_items. My question is this: is it better for the save() method on this model to accept:

    - the request object (I don't really like this coupling)
    - the form data as arguments or kwargs (a list of primary keys and the other form fields)
    - a list of Items (the calling form or view will look up the Items and build the list, as well as pass in the other form fields)
    - some other approach?

    I know this is a bit subjective, but I was wondering what the general idea is. I've looked through a lot of code, but I'm having a hard time finding a pattern I like.
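
    As a reference point, here is a minimal sketch of the third option (save() accepting already-looked-up Item instances), written as a fragment of a hypothetical models.py; build_calculated_items is a made-up stand-in for the business logic, and the DecimalField arguments are arbitrary:

        from django.db import models

        class Inventory(models.Model):
            picked_items = models.ManyToManyField(Item, related_name="items_picked_set")
            calculated_items = models.ManyToManyField(Item, related_name="items_calculated_set")
            rate = models.DecimalField(max_digits=8, decimal_places=2)

            def save(self, *args, items=None, **kwargs):
                # The row must exist before many-to-many fields can be used.
                super().save(*args, **kwargs)
                if items is not None:
                    self.picked_items.set(items)
                    self.calculated_items.set(build_calculated_items(items, self.rate))

    The view would then do the lookup itself, e.g. items = Item.objects.filter(pk__in=form.cleaned_data["items"]), which keeps the model decoupled from both the request and the form.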

  • MySQL left outer join is slow

    - by Ryan Doherty
    Hi, hoping to get some help with this query; I've worked at it for a while now and can't get it any faster:

        SELECT date, count(id) as 'visits'
        FROM dates
        LEFT OUTER JOIN visits ON (dates.date = DATE(visits.start) and account_id = 40)
        WHERE date >= '2010-12-13' AND date <= '2011-1-13'
        GROUP BY date
        ORDER BY date ASC

    That query takes about 8 seconds to run. I've added indexes on dates.date, visits.start, visits.account_id, and visits.start+visits.account_id, and can't get it to run any faster. Table structure (only showing the relevant columns in the visits table):

        create table visits (
            `id` int(11) NOT NULL AUTO_INCREMENT,
            `account_id` int(11) NOT NULL,
            `start` DATETIME NOT NULL,
            `end` DATETIME NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

        CREATE TABLE `dates` (
            `date` date NOT NULL,
            PRIMARY KEY (`date`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    The dates table contains all days from 2010-1-1 to 2020-1-1 (~3k rows). The visits table contains about 400k rows dating from 2010-6-1 to yesterday. I'm using the dates table so the join will return 0 visits for days when there were no visits. The results I want, for reference:

        +------------+--------+
        | date       | visits |
        +------------+--------+
        | 2010-12-13 |    301 |
        | 2010-12-14 |    356 |
        | 2010-12-15 |    423 |
        | 2010-12-16 |    332 |
        | 2010-12-17 |    346 |
        | 2010-12-18 |    226 |
        | 2010-12-19 |    213 |
        | 2010-12-20 |    311 |
        | 2010-12-21 |    273 |
        | 2010-12-22 |    286 |
        | 2010-12-23 |    241 |
        | 2010-12-24 |    149 |
        | 2010-12-25 |    102 |
        | 2010-12-26 |    174 |
        | 2010-12-27 |    258 |
        | 2010-12-28 |    348 |
        | 2010-12-29 |    392 |
        | 2010-12-30 |    395 |
        | 2010-12-31 |    278 |
        | 2011-01-01 |    241 |
        | 2011-01-02 |    295 |
        | 2011-01-03 |    369 |
        | 2011-01-04 |    438 |
        | 2011-01-05 |    393 |
        | 2011-01-06 |    368 |
        | 2011-01-07 |    435 |
        | 2011-01-08 |    313 |
        | 2011-01-09 |    250 |
        | 2011-01-10 |    345 |
        | 2011-01-11 |    387 |
        | 2011-01-12 |      0 |
        | 2011-01-13 |      0 |
        +------------+--------+

    Thanks in advance for any help!

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G of RAM, 10 terabyte drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G of RAM but similar hard drives, RAID, etc. It also ran Apache, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load among a few of our other slave servers, which also run the application. The problem is that the new db server has mysqld.exe consuming 95-100% of CPU almost all the time, and it is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting up config files on small machines, so can anyone help me get the my.ini file right for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: we have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup. Here is the my.ini file:

        # MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition for the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.

  • Strange Access Denied warning when running the simplest C++ program.

    - by DaveJohnston
    I am just starting to learn C++ (coming from a Java background), and I have come across something that I can't explain. I am working through the C++ Primer book and doing the exercises. Every time I get to a new exercise, I create a new .cpp file and set it up with the main method (and any includes I think I will need), e.g.:

        #include <list>
        #include <vector>

        int main(int argc, char **args) {
        }

    and, just to make sure, I go to the command prompt and compile and run:

        g++ whatever.cpp
        a.exe

    Normally this works just fine and I start working on the exercise, but I just did it and got a strange error. It compiles fine, but when I run it, it says "Access Denied" and AVG pops up telling me that a threat has been detected: 'Trojan Horse Generic 17.CKZT'. I tried compiling again using the Microsoft compiler (cl.exe) and it runs fine. So I went back and added:

        #include <iostream>

    compiled using g++, and ran. This time it worked fine. So can anyone tell me why AVG would report an empty main method as a trojan horse, but not if the iostream header is included?

  • csv file upload and update mysql db

    - by Indra
    I am very new to PHP. I am using the following code to open a CSV file and update my database. I need to check the value of the first column of the first row of the CSV file: if it matches "some text 1", then I need to run code1; if it is "some text 2", run code2; else code3. I can use an if/else condition, but since I am using a while loop it fails. Can anyone help me?

        $handle = fopen($file_tmp, "r");
        while (($fileop = fgetcsv($handle, ",")) !== false) {
            // I need to check here
            $companycode = mysql_real_escape_string($fileop[0]);
            $Item = mysql_real_escape_string($fileop[3]);
            $pack = preg_replace('/[^A-Za-z0-9\. -]/', '', $fileop[4]);
            $lastmonth = mysql_real_escape_string($fileop[5]);
            $ltlmonth = mysql_real_escape_string($fileop[6]);
            $op = mysql_real_escape_string($fileop[9]);
            $pur = mysql_real_escape_string($fileop[10]);
            $sale = mysql_real_escape_string($fileop[12]);
            $bal = mysql_real_escape_string($fileop[17]);
            $bval = mysql_real_escape_string($fileop[18]);
            $sval = mysql_real_escape_string($fileop[19]);
            $sq1 = mysql_query("INSERT INTO `sas` (companycode,Item,pack,lastmonth,ltlmonth,op,pur,sale,bal,bval,sval) VALUES ('$companycode','$Item','$pack','$lastmonth','$ltlmonth','$op','$pur','$sale','$bal','$bval','$sval')");
        }
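
    The control flow being asked about can be seen in a minimal Python sketch: read the first row once, pick a handler based on its first column, then apply that handler to every row. The file name, the marker strings, and the handlers are all hypothetical stand-ins:

        import csv

        def code1(row): print("handler 1:", row)
        def code2(row): print("handler 2:", row)
        def code3(row): print("handler 3:", row)

        with open("upload.csv", newline="") as f:
            rows = iter(csv.reader(f))
            first = next(rows)              # inspect the first row only once
            if first[0] == "some text 1":
                handler = code1
            elif first[0] == "some text 2":
                handler = code2
            else:
                handler = code3
            handler(first)                  # process the first row itself
            for row in rows:                # then the rest of the file
                handler(row)

    The same shape works in PHP: make the first fgetcsv() call before the loop, choose the branch there, and let the while loop handle the remaining rows.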

  • Managed WMI Event class is not an event class???

    - by galets
    I am using the directions from here: http://msdn.microsoft.com/en-us/library/ms257351(VS.80).aspx to create a managed event class. Here's the code that I wrote:

        [ManagementEntity]
        [InstrumentationClass(InstrumentationType.Event)]
        public class MyEvent
        {
            [ManagementKey]
            public string ID { get; set; }

            [ManagementEnumerator]
            static public IEnumerable<MyEvent> EnumerateInstances()
            {
                var e = new MyEvent() { ID = "9A3C1B7E-8F3E-4C54-8030-B0169DE922C6" };
                return new MyEvent[] { e };
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                var thisAssembly = typeof(Program).Assembly;
                var wmi_installer = new AssemblyInstaller(thisAssembly, null);
                wmi_installer.Install(null);
                wmi_installer.Commit(null);
                InstrumentationManager.RegisterAssembly(thisAssembly);
                Console.Write("Press Enter...");
                Console.ReadLine();
                var e = new MyEvent() { ID = "A6144A9E-0667-415B-9903-220652AB7334" };
                Instrumentation.Fire(e);
                Console.Write("Press Enter...");
                Console.ReadLine();
                wmi_installer.Uninstall(null);
            }
        }

    I can run the program, and it installs properly. Using wbemtest.exe I can browse to the event and "show mof":

        [dynamic: ToInstance, provider("WmiTest, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null")]
        class MyEvent
        {
            [read, key] string ID;
        };

    Notice that the class does not inherit from __ExtrinsicEvent, which is weird... I can also run select * from MyEvent and get the result, and Instrumentation.Fire() returns no error. However, when I try to subscribe to the event using the "Notification Query" option, I get 0x80041059:

        Number: 0x80041059
        Facility: WMI
        Description: Class is not an event class.

    What am I doing wrong, and is there a correct way to create a managed WMI event?

  • Could not launch console app with Xcode 4.4

    - by lp1756
    I have a project with two targets -- an iOS app and an OS X console app. The latter was created using Xcode's File > New > Target and selecting "Command Line Tool". This console app is used to prep a default database needed by the iOS app, using Core Data. This had been working fine until I upgraded to Mountain Lion and Xcode 4.4. Now when I try to run the command line tool, I get a "Could Not Launch -- permission denied" error. I have tried playing around with signing certificates, to no avail. Interestingly, if I create a new "hello, world" command line tool in a new project, it works just fine, and it is not signed at all. I checked the file, and it has -rwxr-xr-x permissions. In the debugger, the app fails on startup, even before it tries to access the moms. If I try to run it outside of the debugger at the command line, it ends with a kill 9 message. Any ideas would be greatly appreciated.

  • JBoss admin-console fails to load - missing Log4J jar?

    - by Jack
    I downloaded JBoss 5.1 and unzipped it to ~/jboss/, so that JBoss is installed in ~/jboss/jboss-5.1.0.GA/. I run the default deployment using the following command, found in jboss/jboss-5.1.0.GA/bin:

        ./run.sh -c default

    While JBoss starts (http://127.0.0.1:8080/), the admin-console is not deployed. The log file in jboss/jboss-5.1.0.GA/server/default/log shows the following information:

        DEPLOYMENTS IN ERROR:

        Deployment "vfsfile:/Users/jackwootton/jboss/jboss-5.1.0.GA/server/default/deploy/admin-console.war/" is in error due to the following reason(s):
        org.jboss.deployers.spi.DeploymentException: URL file:/Users/jackwootton/jboss/jboss-5.1.0.GA/server/default/tmp/az6n6v-tjilfb-h32fokxn-1-h32fosuo-v/admin-console.war/ deployment failed

        Deployment "vfszip:/Users/jackwootton/jboss/jboss-5.1.0.GA/server/default/deploy/quartz-ra.rar/" is in error due to the following reason(s):
        org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.

    The Log4J jar file exists in jboss/jboss-5.1.0.GA/lib/jboss-logging-log4j.jar. I have three questions:

    1. Have I understood the problem correctly (i.e. that admin-console cannot find the required Log4J jar file and is therefore not deployed)?
    2. What can I do to fix this problem?
    3. Why would an out-of-the-box deployment have this problem in the first place?

  • iPhone Core Data Internal Inconsistency

    - by kiyoshi
    This question has something to do with the question I posted here: http://stackoverflow.com/questions/1230858/iphone-core-data-crashing-on-save. However, the error is different, so I am asking a new question. Now I get this error when trying to insert new objects into my managedObjectContext:

        *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '"MailMessage" is not a subclass of NSManagedObject.'

    But clearly it is:

        @interface MailMessage : NSManagedObject {
        ....

    And when I run this code:

        NSManagedObjectModel *managedObjectModel = [[self.managedObjectContext persistentStoreCoordinator] managedObjectModel];
        NSEntityDescription *entity = [[managedObjectModel entitiesByName] objectForKey:@"MailMessage"];
        NSManagedObject *newObject = [[NSManagedObject alloc] initWithEntity:entity insertIntoManagedObjectContext:self.managedObjectContext];

    it runs fine when I do not present an MFMailComposeViewController, but if I run this code in the

        - (void)mailComposeController:(MFMailComposeViewController*)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError*)error

    method, it throws the above error when creating the newObject variable. Printing the entity object produces the following in both cases, so I don't think the managedObjectContext is completely invalid:

        (<NSEntityDescription: 0x1202e0>) name MailMessage, managedObjectClassName MailMessage, renamingIdentifier MailMessage, isAbstract 0, superentity name (null), properties {

    I have no idea why it would say MailMessage is not a subclass of NSManagedObject at that point and not at the other. Any help would be appreciated; thanks in advance.

  • Zend debugger in Eclipse - timeout every time

    - by jax
    I am running a virtual server in the US, and I am trying to get my Eclipse machine at home (outside the USA) to connect to that server. I have set up Zend on the server. When I run phpinfo() I get the following Zend output (note: 1.2.3.4 is the external WAN IP address of my ADSL router at home):

        Directive                        Local Value           Master Value
        zend_debugger.allow_hosts        127.0.0.1,1.2.3.4     127.0.0.1,1.2.3.4
        zend_debugger.allow_tunnel       no value              no value
        zend_debugger.deny_hosts         no value              no value
        zend_debugger.expose_remotely    always                always
        zend_debugger.httpd_uid          -1                    -1
        zend_debugger.max_msg_size       2097152               2097152
        zend_debugger.tunnel_max_port    65535                 65535
        zend_debugger.tunnel_min_port    1024                  1024

    So Zend looks like it is working OK on the server side. But when I run a debug session and select 'Test Debugger', I get a timeout every time. I have already added dummy.php to the root folder of the server. In 'installed debuggers' I double-clicked on Zend and put in my external WAN IP address. I noticed that the port is 10000; I also have webmin running on that port on the server. Will there be a conflict?

  • How to handle 30k files in a project which requires them?

    - by Jeremiah
    Visual Studio 2010 RC - Silverlight Application. We have a library of images that we need to have access to. They are given to us by a vendor (through an installer), and they are not in a database; they are files in a folder (a very large monster of a folder). We do not control when the images change, so the vendor needs to be able to override them individually. We get updates from this vendor frequently enough to say that these images change "randomly" and without our (programmers') knowledge. The problem: I don't want 30k images in SVN. Heck, I don't even want to imagine them in my solution. However, our application requires them in order to run properly. So our build/staging servers need access to these images (we have two build servers). The question: how would you handle it when your application will not work as specified without access to each of 30k images, and you don't control when those images change? I do not want a crazy large SVN repository. Because I don't know when any of these images change, I really don't want them in my solution (and definitely don't want a large solution, either). I also don't want a bunch of manual steps to perform every time these images change. Our mantra, up to this point, has always been: any developer can download from SVN, compile, and run our app. These images are going to kill that mantra. I'm tempted to make a WCF service that will return images if they exist and a dummy image if they don't. That way all dev boxes will get a dummy image, and our build/staging/production boxes will get real images (since they actually have the vendor's image installer installed). This has to be a solved problem. What have other people done to handle these types of problems? I'm open to suggestions.
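
    The fallback service idea sketches out simply; the essential part is just "return the real file if it exists, otherwise the placeholder". A minimal Python version of that lookup, with both paths invented for the example:

        from pathlib import Path

        IMAGE_DIR = Path("/data/vendor-images")   # wherever the vendor installer puts them
        DUMMY = Path("/data/dummy.png")           # placeholder served on dev boxes

        def get_image_bytes(name: str) -> bytes:
            # Return the named vendor image, or the dummy if it is absent.
            # Real code would also sanitize `name` against path traversal.
            candidate = IMAGE_DIR / name
            if candidate.is_file():
                return candidate.read_bytes()
            return DUMMY.read_bytes()

    Wrapped in a WCF (or any HTTP) endpoint, dev boxes that lack the vendor installer get the dummy image and production boxes get the real one, with no images checked into SVN.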

  • Apache Cordova (PhoneGap): is JSONP needed for cross-site scripting?

    - by DEX
    I've just started using Apache Cordova. I have a library that makes calls (via AJAX) to a SOAP server. When I run these on my local machine in Chrome, I get cross-site scripting errors when trying to make calls to the service. When I run the same exact code using the Cordova browser in the iOS emulator, the scripts seem to hit the server fine and the response data is received properly. So my question is: how is the Cordova browser able to make these requests without cross-site scripting permissions and JSONP? One thing I noticed is that when the request is sent from iOS, there is no "Origin" header. Is this allowing the Cordova browser to stealthily circumvent cross-site scripting requirements? Is it possible that the node.js server on the device (I believe this is how Cordova works) is manipulating the headers to allow this? I'd like to avoid enabling cross-site scripting on my site, so I think this "feature" is nice, but I'm wondering if it's a security hole as well. Anyone have experience with this?

  • LINQ to SQL - Lightweight O/RM?

    - by CoffeeAddict
    I've heard from some that LINQ to SQL is good for lightweight apps. But then I see LINQ to SQL being used for Stack Overflow and a bunch of other .coms I know (from interviewing with them). OK, so is this true? For an e-commerce site that's bringing in millions, where you're typically only doing basic CRUD most of the time, with the exception of an occasional stored proc for something more complex, is LINQ to SQL complete enough and performance-wise good enough (or tweakable enough) to run happily on an e-commerce site? I've heard that when using LINQ to SQL, you just need to tweak performance on the DB side for a better approach. So there are really 2 questions here:

    1) The meaning/scope/definition of a "lightweight" O/RM solution: what the heck does "lightweight" mean when people say LINQ to SQL is a "lightweight O/RM", and is that true? If it is so lightweight, then why do I see a bunch of huge .coms using it? Is it good enough to run major .coms (obviously it looks like it is), and what determines the context of "lightweight"? It's such a generic statement.

    2) Performance: I'm working on my own .com and researching different O/RMs. I'm not really looking at the Entity Framework (yet); I just want to figure out the LINQ to SQL basics here and determine whether it will be efficient enough for me. The problem, I think, is that you can't tweak or control the SQL it generates...

  • Is it possible to create multi-tiered WHERE statements in MySQL

    - by Brendan
    I'm currently developing a program that will generate reports based on lead data. My issue is that I'm running 3 queries for something that I would like to only have to run one query for. For instance, I want to gather data for leads generated in the past day:

        submission_date > (NOW() - INTERVAL 1 DAY)

    and I would like to find out how many total leads there were and how many sold leads there were in that timeframe (sold=1 / sold=0). The issue is that this is currently done with 2 queries: one with WHERE sold = 1 and one with WHERE sold = 0. This is all well and good, but when I want to generate this data for the past day, week, month, year, and all time, I will have to run 10 queries to obtain the data. I feel like there HAS to be a more efficient way of doing this. I know I can create a MySQL function for this, but I don't see how that would solve the problem. Thanks!!
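
    One standard way to collapse each sold/unsold pair of queries into a single query is conditional aggregation: count all rows, and sum a CASE expression for the sold ones. A minimal runnable sketch, with SQLite standing in for MySQL and the table and data invented for the demo (the real query would also keep the submission_date filter):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE leads (id INTEGER PRIMARY KEY, sold INTEGER)")
        conn.executemany("INSERT INTO leads (sold) VALUES (?)",
                         [(1,), (0,), (1,), (1,), (0,)])

        total, sold = conn.execute(
            "SELECT COUNT(*), SUM(CASE WHEN sold = 1 THEN 1 ELSE 0 END) FROM leads"
        ).fetchone()
        print("total:", total, "sold:", sold)   # total: 5 sold: 3

    The same trick extends to the day/week/month/year/all-time breakdown: one CASE-wrapped SUM per timeframe, computed in a single pass over the table.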

  • Finding the digit root of a number

    - by Jessica M.
    The study question is to find the digit root of an already-provided number. The teacher provides us with the number 2638. To find the digit root, you add each digit separately: 2 + 6 + 3 + 8 = 19. Then you take the result, 19, and add those two digits together: 1 + 9 = 10. Do the same thing again: 1 + 0 = 1. The digit root is 1. My first step was to use the variable total to add up the digits of 2638 and find the total of 19. Then I tried to use the second while loop to separate the two digits by using %. I have to solve the problem using basic integer arithmetic (+, -, *, /).

    1. Is it necessary, and/or possible, to solve the problem using nested while loops?
    2. Is my math correct?
    3. As I wrote it here, it does not run in Eclipse. Am I using the while loops correctly?

        import acm.program.*;

        public class Ch4Q7 extends ConsoleProgram {
            public void run() {
                println("This program attempts to find the digit root of your number: ");
                int n = readInt("Please enter your number: ");
                int total = 0;
                int root = total;
                while (n > 0) {
                    total = total + (n % 10);
                    n = (n / 10);
                }
                while (total > 0) {
                    root = total;
                    total = ((total % 10) + total / 10);
                }
                println("your root should be " + root);
            }
        }
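
    For reference, here is a minimal Python sketch of the digit-root computation described above, using nested while loops to repeat the digit sum until a single digit remains:

        def digit_root(n):
            while n >= 10:            # more than one digit left
                total = 0
                while n > 0:          # sum the digits of n
                    total += n % 10
                    n //= 10
                n = total
            return n

        print(digit_root(2638))  # 2+6+3+8 = 19 -> 1+9 = 10 -> 1+0 = 1, prints 1

    (As a sanity check, for positive n the digit root also equals 1 + (n - 1) % 9.)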

  • Is it advisable to have an interface as the return type?

    - by wb
    I have a set of classes with the same functions but with different logic. However, each class function can return a number of objects. Is it safe to set the return type to the interface? Each class (all using the same interface) does this with different business logic.

        protected IMessage validateReturnType;   // this is in an abstract class

        public bool IsValid()                    // this is in an abstract class
        {
            return (validateReturnType.GetType() == typeof(Success));
        }

        public IMessage Validate()
        {
            if (name.Length < 5)
            {
                validateReturnType = new Error("Name must be 5 characters or greater.");
            }
            else
            {
                validateReturnType = new Success("Name is valid.");
            }
            return validateReturnType;
        }

    Are there any pitfalls in unit testing the return type of a function? Also, is it considered bad design to have functions that need to be run in a particular order to succeed? In this example, Validate() would have to be run before IsValid(), or else IsValid() would always return false. Thank you.

  • assign characters to key combinations in XP or Visual Studio .Net

    - by cpj
    I'm running Mac OS X on a MacBook Pro (UK keyboard). I run Windows XP under Parallels in a VM, and I run Visual Studio .NET 2003 and 2008 in that XP VM when I need to. I have English (United Kingdom) and English (United States) keyboards set up in XP (they switch sometimes for no apparent reason). There is no hash "#" key on my Mac's keyboard. In OS X, however, I can get a hash with an alt+3 key combination. But in Windows XP I cannot make a "#" character. I can go to the character map in Windows and copy a hash, switch into OS X and copy a hash, or search in code and copy a hash, but I cannot make a hash in XP using my keyboard without typing U+0023:, which, as you can imagine, is annoying. Coding anything with hash symbols is becoming a chore. Anyone got any advice or key-mapping tricks I can use to get hash characters working in XP using my Mac UK keyboard?
