Search Results

Search found 12887 results on 516 pages for 'small jam'.


  • Why can't we overload "=" using a friend function?

    - by ashish-sangwan
    Why is it not allowed to overload "=" using a friend function? I have written a small program, but it gives an error. class comp { int real; int imaginary; public: comp(){real=0; imaginary=0;} void show(){cout << "Real="<<real<<" Imaginary="<<imaginary<<endl;} void set(int i,int j){real=i;imaginary=j;} friend comp operator=(comp &op1,const comp &op2); }; comp operator=(comp &op1,const comp &op2) { op1.imaginary=op2.imaginary; op1.real=op2.real; return op1; } int main() { comp a,b; a.set(10,20); b=a; b.show(); return 0; } The compilation gives the following errors: [root@dogmatix stackoverflow]# g++ prog4.cpp prog4.cpp:11: error: ‘comp operator=(comp&, const comp&)’ must be a nonstatic member function prog4.cpp:14: error: ‘comp operator=(comp&, const comp&)’ must be a nonstatic member function prog4.cpp: In function ‘int main()’: prog4.cpp:25: error: ambiguous overload for ‘operator=’ in ‘b = a’ prog4.cpp:4: note: candidates are: comp& comp::operator=(const comp&) prog4.cpp:14: note: comp operator=(comp&, const comp&)
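
    The compiler message itself points at the fix: the language requires operator= to be a non-static member function, so the left-hand operand is always *this rather than a separate parameter. A minimal sketch of the member-function form, reusing the class from the question with the friend declaration dropped:

      class comp {
          int real;
          int imaginary;
      public:
          comp() : real(0), imaginary(0) {}
          void show() { cout << "Real=" << real << " Imaginary=" << imaginary << endl; }
          void set(int i, int j) { real = i; imaginary = j; }
          // operator= has to be a non-static member, so "this" plays the role of op1
          comp& operator=(const comp &op2) {
              real = op2.real;
              imaginary = op2.imaginary;
              return *this;   // returning *this allows chaining: a = b = c
          }
      };

    With that in place, b = a in main() resolves to this single member overload; the earlier ambiguity was between the friend version and the implicitly declared copy assignment the compiler generates anyway.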


  • Time taken for memcpy decreases after a certain point

    - by tss
    I have code which increases the size of a block of memory (identified by a pointer) exponentially. Instead of realloc, I use malloc followed by memcpy. Something like this: int size=5,newsize; int *c = malloc(size*sizeof(int)); int *temp; while(1) { newsize=2*size; //begin time temp=malloc(newsize*sizeof(int)); memcpy(temp,c,size*sizeof(int)); //end time //print time in milliseconds c=temp; size=newsize; } Thus the number of bytes getting copied increases exponentially. The time required for this task also increases almost linearly with the increase in size. However, after a certain point, the time taken abruptly drops to a very small value and then remains constant. I recorded times for similar code, copying data (of my own type): 5 -> 10 - 2 ms 10 -> 20 - 2 ms . . 2560 -> 5120 - 5 ms . . 20480 -> 40960 - 30 ms 40960 -> 91920 - 58 ms 367680 -> 735360 - 2 ms 735360 -> 1470720 - 2 ms 1470720 -> 2941440 - 2 ms What is the reason for this drop in time? Does a more optimal memcpy method get called when the size is large?
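
    For reference, a sketch of how the timing placeholders above could be filled in with clock_gettime (this assumes a POSIX system; on older glibc the link line needs -lrt). The loop is bounded here so the program terminates, and, like the original, the old block is never freed:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <time.h>

      int main(void)
      {
          int size = 5, newsize;
          int *c = malloc(size * sizeof(int));
          int *temp;
          struct timespec t0, t1;

          while (size < 50000000) {                         /* bounded instead of while(1) */
              newsize = 2 * size;
              clock_gettime(CLOCK_MONOTONIC, &t0);          /* begin time */
              temp = malloc(newsize * sizeof(int));
              memcpy(temp, c, size * sizeof(int));
              clock_gettime(CLOCK_MONOTONIC, &t1);          /* end time */
              printf("%d -> %d : %.3f ms\n", size, newsize,
                     (t1.tv_sec - t0.tv_sec) * 1000.0 +
                     (t1.tv_nsec - t0.tv_nsec) / 1e6);
              c = temp;                                     /* old block leaked, as in the question */
              size = newsize;
          }
          return 0;
      }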


  • JSON Array Created in PHP/MySQL incorrectly decoded using JQuery

    - by Zak
    I am attempting to make an AJAX call to a very small PHP script that should return me an array that could be echo'd and decoded using JQuery. Here is what I have: My PHP page called to by AJAX: $web_q=mysql_query("select * from sec_u_g where uid='$id' "); $rs = array(); while($rs[] = mysql_fetch_assoc($web_q)) { } print_r(json_encode($rs)); This outputs: [{"id":"3","uid":"39","gid":"16"},{"id":"4","uid":"39","gid":"4"},{"id":"5","uid":"39","gid":"5"},{"id":"6","uid":"39","gid":"6"},{"id":"7","uid":"39","gid":"7"},{"id":"8","uid":"39","gid":"8"},{"id":"9","uid":"39","gid":"9"},false] I don't understand the "false" at the end for one .. But then I send to to JQuery and use: $.each(json.result, function(i, object) { $.each(object, function(property, value) { alert(property + "=" + value); }); }); This just fails. I try to alert "result" by itself which is set by: $.post("get_ug.php",{id:txt},function(result){ }); My output alerts are as follows: 1) The key is '0' and the value is '[' 2) The key is '1' and the value is 'f' 3) The key is '2' and the value is 'a' 4) The key is '3' and the value is 'l' 5) The key is '4' and the value is 's' 6) The key is '5' and the value is 'e' 7) The key is '6' and the value is ']' 8) The key is '7' and the value is ' ' (<-- Yes the line break is there in the alert) I am exhausted from trying different ideas and scripts. Other than setting a delimiter myself and concatenating my own array and decoding it with a custom script, does anyone have any ideas?? Thank you!!
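
    Two details are worth illustrating with a sketch. The trailing false is the end-of-results value mysql_fetch_assoc() returns, and while($rs[] = ...) appends it; testing the return value before appending avoids that. The character-by-character alerts happen because jQuery is iterating a plain string; passing "json" as the dataType of $.post (or sending a JSON Content-Type) hands the callback a decoded object instead. Assuming the same table and script names as above:

      <?php
      // sketch: same query, but the end-of-results "false" is never appended
      $web_q = mysql_query("select * from sec_u_g where uid='$id' ");
      $rs = array();
      while ($row = mysql_fetch_assoc($web_q)) {
          $rs[] = $row;
      }
      header('Content-Type: application/json');
      echo json_encode($rs);

    and on the client side:

      // passing "json" as the dataType means "result" arrives already decoded
      $.post("get_ug.php", { id: txt }, function (result) {
          $.each(result, function (i, object) {
              $.each(object, function (property, value) {
                  alert(property + "=" + value);
              });
          });
      }, "json");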


  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.


  • Is it possible to load an entire SQL Server CE database into RAM?

    - by DanM
    I'm using LinqToSql to query a small SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table via a foreign key, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName = "Stopwatch"), but for some reason, SQL Server CE hangs up pretty bad when I try to do stuff like this. One of my queries, which isn't really that complicated takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join then manually with my own query, but this is throwing out a lot of the appeal of LinqToSql. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance? Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2Mb. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view.


  • Port scanning using threadpool

    - by thenry
    I am trying to run a small app that scans ports and checks to see if they are open, using and practicing with thread pools. The console window asks for a number, scans ports from 1 to X, and displays whether each port is open or closed. My problem is that as it goes through each port, it sometimes stops prematurely. It doesn't stop at just one number either; it's pretty random. For example, if I specify 200, the console will scroll through each port and then stop at 110. The next time I run it, it stops at 80. Code (I've left some things out; assume all variables are declared where they should be). The first part is in Main. static void Main(string[] args) { string portNum; int convertedNum; Console.WriteLine("Scanning ports 1-X"); portNum = Console.ReadLine(); convertedNum = Convert.ToInt32(portNum); try { for (int i = 1; i <= convertedNum; i++) { ThreadPool.QueueUserWorkItem(scanPort, i); Thread.Sleep(100); } } catch (Exception e) { Console.WriteLine("exception " + e); } } static void scanPort(object o) { TcpClient scanner = new TcpClient(); try { scanner.Connect("127.0.0.1",(int)o); Console.WriteLine("Port {0} open", o); } catch { Console.WriteLine("Port {0} closed",o); } } }
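
    If the cause is the process exiting before every queued item has run (ThreadPool threads are background threads, so they are killed as soon as Main returns), one way to make Main wait is a CountdownEvent, available from .NET 4. A sketch that keeps the scanPort logic from the question:

      using System;
      using System.Net.Sockets;
      using System.Threading;

      class Program
      {
          static CountdownEvent countdown;

          static void Main(string[] args)
          {
              Console.WriteLine("Scanning ports 1-X");
              int convertedNum = Convert.ToInt32(Console.ReadLine());

              // one signal expected per queued port scan
              countdown = new CountdownEvent(convertedNum);

              for (int i = 1; i <= convertedNum; i++)
                  ThreadPool.QueueUserWorkItem(scanPort, i);

              countdown.Wait();             // Main blocks until every scan has signalled
              Console.WriteLine("Done.");
          }

          static void scanPort(object o)
          {
              try
              {
                  using (TcpClient scanner = new TcpClient())
                  {
                      scanner.Connect("127.0.0.1", (int)o);
                      Console.WriteLine("Port {0} open", o);
                  }
              }
              catch
              {
                  Console.WriteLine("Port {0} closed", o);
              }
              finally
              {
                  countdown.Signal();       // count down whether the connect succeeded or not
              }
          }
      }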


  • Help me sort programming languages a bit

    - by b-gen-jack-o-neill
    Hi, so I asked here a few days ago about C# and its principles. Now, if I may, I have some additional general questions about some languages, because for a novice like me it seems a bit confusing. To be exact, I want to ask more about language function capabilities than syntax and so on. To be honest, it's just these special functions that bother me and make me so confused. For example, C has its printf(), Pascal has writeln(), and so on. I know that basically the assembler output of these functions would be similar; every language has more or less its own special functions for console output, file manipulation, etc. But all these functions are de facto part of the OS API, so why, for example, does C distinguish between C standard library functions and (on Windows) WinAPI functions, when even printf() has to use some Windows feature and call some of its functions to actually show the desired text in the console window, because the actual "showing" is done by the OS? Where is the line between language functions and the system API? Now, the languages I don't quite understand - Python, Ruby and similar. To be more specific, I know they are similar to Java and C# in that they are compiled into bytecode. But I do not understand their capabilities in terms of building GUI applications. I saw a tutorial on using Ruby to program GUI applications on Linux and Windows. But isn't that just some kind of upgrade? I mean, from other tutorials it seemed like these languages were first intended for small scripts rather than for building big applications. I hope you understand why I am confused. If you do, please help me sort it out a bit; I have no one to ask.


  • How do I migrate from a basic plaintext password authentication to an OAuth based system?

    - by different
    Hello, Found out today that Twitter will be discontinuing its basic authentication for its API; the push is now towards OAuth but I don’t have a clue as to how to use it or whether it’s the right path for me. All I want to be able to do is post a tweet linking to the most recently published post when I hit publish. Currently I’m sending the login credentials for my Twitter account as plaintext, which I realise isn’t that secure but as my site is fairly small it isn’t an issue at least for now. I’m using this basic PHP code: $status = urlencode(stripslashes(urldecode("Test tweet"))); $tweetUrl = 'http://www.twitter.com/statuses/update.xml'; $curl = curl_init(); curl_setopt($curl, CURLOPT_URL, "$tweetUrl"); curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 2); curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1); curl_setopt($curl, CURLOPT_POST, 1); curl_setopt($curl, CURLOPT_POSTFIELDS, "status=$status"); curl_setopt($curl, CURLOPT_USERPWD, "$username:$password"); $result = curl_exec($curl); $resultArray = curl_getinfo($curl); if ($resultArray['http_code'] == 200) { curl_close($curl); $this->redirect(""); } else { curl_close($curl); echo 'Could not post to Twitter. Please go back and try again.'; } How do I move from this to an OAuth system? Do I need to?
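
    One route that was widely used for exactly this single-account case is Abraham Williams' twitteroauth PHP library: register the application once to obtain a consumer key/secret, generate an access token/secret for your own account, and store all four. The sketch below assumes that library; the include path and the constant names are placeholders, and the exact class and method names should be checked against the version you install:

      <?php
      // sketch using the abraham/twitteroauth library -- keys and paths are placeholders
      require_once 'twitteroauth/twitteroauth.php';

      $connection = new TwitterOAuth(CONSUMER_KEY, CONSUMER_SECRET,
                                     ACCESS_TOKEN, ACCESS_TOKEN_SECRET);

      // OAuth equivalent of the basic-auth POST to statuses/update.xml above
      $result = $connection->post('statuses/update', array('status' => 'Test tweet'));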


  • What do you call a generalized (non-GUI-related) "Model-View-Controller" architecture?

    - by dcuccia
    I am currently refactoring code that coordinates multiple hardware components for data acquisition, and feeling a bit like I'm recreating the wheel. In particular, an MVC-like pattern seems to be emerging. Except, this has nothing to do with a GUI and I'm worried that I'm forcing this particular pattern where another might be more appropriate. Here's my scenario: Individual hardware "component" classes obey interface contracts for each hardware type. Previously, component instances were orchestrated by a single monolithic InstrumentController class, which relied heavily on configuration + branching logic for executing a specific acquisition sequence. After an iteration, I have a separate controller for each component, with these controllers all managed by a small InstrumentControllerBase (or its derivatives). The composite system will receive "input" either programmatically or via inter-hardware component triggering - in either case these interactions are routed to, and handled by, the appropriate controller. So, I have something that feels MVC-esque, but I don't know if that's because I'm forcing the point. With little direct MVC experience in application development, it's hard to know if I'm just trying to make my scenario fit MVC, where another pattern might be a good alternative or complementary. My problem is, search results and wiki documentation of this family of patterns seem to immediately drop me into GUI-specific discussions. I understand "M means Model data and the V means View" - but what do you call the superset pattern? Component-Commander-Controller? Whence can I exhume examples exemplary?


  • Creating a network adapter - how hard is it?

    - by Vilx-
    I'm interested in building a little (commercial) device on top of Arduino. I want it to be able to interface with network. Network as in standard Ethernet, Cat5, RJ-45, etc. I know that there is an Ethernet Shield, but it costs even more than the Arduino itself, and it's pretty big. Naturally, I want my device to be as small and as cheap as possible. So I'm thinking about recreating an Ethernet module myself. The problem is - I haven't got any experience with Ethernet, nor do I have a good idea where to start looking. Thus I can't even say if my ideas are feasible. Ultimately I would like the device to have three ports - one for incoming signal, two for outgoing, so the device is essentially a little switch where it is plugged in itself as well. The switching capabilities need not be very fast - the volume of data will be low. 10Mbit is more than enough, can be even slower. If that is not possible, a single port for controlling the device itself will also do. Another possibility I'm considering is power line communications - sending information through power lines. That's another area I've no experience with. What hardware should I be looking at, and where can I find information about the necessary software? So - can anyone tell me if these ideas are feasible, and if yes - where should I start looking?


  • Storing an encrypted cookie with Rails

    - by J. Pablo Fernández
    I need to store a small piece of data (less than 10 characters) in a cookie in Rails and I need it to be secure. I don't want anybody being able to read that piece of data or injecting their own piece of data (as that would open up the app to many kinds of attacks). I think encrypting the contents of the cookie is the way to go (should I also sign it?). What is the best way to do it? Right now I'm doing this, which looks secure, but many things looked secure to people that knew much more than I about security and then it was discovered it wasn't really secure. I'm saving the secret in this way: encryptor = ActiveSupport::MessageEncryptor.new(Example::Application.config.secret_token) cookies[:secret] = { :value => encryptor.encrypt(secret), :domain => "example.com", :secure => !(Rails.env.test? || Rails.env.development?) } and then I'm reading it like this: encryptor = ActiveSupport::MessageEncryptor.new(Example::Application.config.secret_token) secret = encryptor.decrypt(cookies[:secret]) Is that secure? Any better ways of doing it? Update: I know about Rails' session and how it is secure, both by signing the cookie and by optionally storing the contents of the session server side and I do use the session for what it is for. But my question here is about storing a cookie, a piece of information I do not want in the session but I still need it to be secure.
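
    On the "should I also sign it?" part: ActiveSupport::MessageEncryptor also provides encrypt_and_sign / decrypt_and_verify (in the ActiveSupport versions shipped with Rails 3 and later), which reject a cookie that has been altered instead of trying to decrypt it. A sketch of the same code using those calls:

      # sketch -- same structure as above, but with the signed variants
      encryptor = ActiveSupport::MessageEncryptor.new(Example::Application.config.secret_token)

      cookies[:secret] = {
        :value  => encryptor.encrypt_and_sign(secret),
        :domain => "example.com",
        :secure => !(Rails.env.test? || Rails.env.development?)
      }

      # reading it back; decrypt_and_verify raises if the cookie was modified
      secret = encryptor.decrypt_and_verify(cookies[:secret])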


  • How to avoid mouse move on Touch

    - by VirtualBlackFox
    I have a WPF application that can be used both with a mouse and with touch. I disable all the Windows "enhancements" to just have touch events: Stylus.IsPressAndHoldEnabled="False" Stylus.IsTapFeedbackEnabled="False" Stylus.IsTouchFeedbackEnabled="False" Stylus.IsFlicksEnabled="False" The result is that a click behaves like I want, except on two points: The small "touch" cursor (little white star) appears where I clicked and when dragging. Completely useless, as the user's finger is already at this location and no feedback is required (except my element potentially changing color if actionable). Elements stay in the "Hover" state after the movement / click ends. Both are consequences of the fact that while Windows transmits touch events correctly, it still moves the mouse to the last main touch event. I don't want Windows to move the mouse at all when I use touch inside my application. Is there a way to completely avoid that? Notes: Handling touch events changes nothing here. Using SetCursorPos to move the mouse away makes the cursor blink and isn't really user-friendly. Disabling the touch panel as an input device completely disables all events (and I also prefer an application-local solution, not system-wide). I don't care if the solution involves COM/PInvoke or is provided in C/C++; I'll translate. If it is necessary to patch/hook some Windows DLLs, so be it; the software will run on a dedicated device anyway. I'm investigating the Surface SDK but I doubt it'll offer a solution: as a Surface is a pure-touch device, there is no risk of bad interaction with the mouse.


  • How to debug JBoss out of memory problem?

    - by user561733
    Hello, I am trying to debug a JBoss out of memory problem. When JBoss starts up and runs for a while, it seems to use memory as intended by the startup configuration. However, it seems that when some unknown user action is taken (or the log file grows to a certain size) using the sole web application JBoss is serving up, memory increases dramatically and JBoss freezes. When JBoss freezes, it is difficult to kill the process or do anything because of low memory. When the process is finally killed via a -9 argument and the server is restarted, the log file is very small and only contains outputs from the startup of the newly started process and not any information on why the memory increased so much. This is why it is so hard to debug: server.log does not have information from the killed process. The log is set to grow to 2 GB and the log file for the new process is only about 300 Kb though it grows properly during normal memory circumstances. This is information on the JBoss configuration: JBoss (MX MicroKernel) 4.0.3 JDK 1.6.0 update 22 PermSize=512m MaxPermSize=512m Xms=1024m Xmx=6144m This is basic info on the system: Operating system: CentOS Linux 5.5 Kernel and CPU: Linux 2.6.18-194.26.1.el5 on x86_64 Processor information: Intel(R) Xeon(R) CPU E5420 @ 2.50GHz, 8 cores This is good example information on the system during normal pre-freeze conditions a few minutes after the jboss service startup: Running processes: 183 CPU load averages: 0.16 (1 min) 0.06 (5 mins) 0.09 (15 mins) CPU usage: 0% user, 0% kernel, 1% IO, 99% idle Real memory: 17.38 GB total, 2.46 GB used Virtual memory: 19.59 GB total, 0 bytes used Local disk space: 113.37 GB total, 11.89 GB used When JBoss freezes, system information looks like this: Running processes: 225 CPU load averages: 4.66 (1 min) 1.84 (5 mins) 0.93 (15 mins) CPU usage: 0% user, 12% kernel, 73% IO, 15% idle Real memory: 17.38 GB total, 17.18 GB used Virtual memory: 19.59 GB total, 706.29 MB used Local disk space: 113.37 GB total, 11.89 GB used


  • Mgmt wants to re-title my position: Any help...? [closed]

    - by JohnFlyTN
    Management here wants to re-title my position, since I'm doing quite a bit of different work than was originally planned. They want my input. After a quick glance over my skill set and job duties, what would we need to describe this position as? I'll just list things I'm at least proficient in, I will not list things I have a passing knowledge of. About me : ~10 years software development. Languages : C, C++, Perl, PHP, C#, TCL, Unix shell scripting, SQL (TSQL, PLSQL) Systems : MS-Dos, Windows 3.1 to 7 for client, NT 4 to 2008 for server, OS/2, IBM MVS & z/OS, Linux ( multiple distros), AIX Current position: I do all sorts of in-house software. The range is single user apps to large systems spanning multiple OS's. One of the larger projects I've designed and coded is about 100k lines of C#, and a database where I have been the sole designer and maintainer. I have near total freedom to design as I see fit, restraints are usually budgetary. Skills required to replace me in my current role: Windows and Unix admin, Database design, .NET up to 3.5 (C#, ASP.NET), C++, Perl, good skills in designing large and efficient data processing systems. Given this small level of information what would you see this as being titled? (is more information required to render a decision?)


  • Calling the same function on different buttons that aren't loaded yet

    - by Jordan Faust
    I cannot get this to work for every button and I cannot find anything explaining why. I'm guessing it is something small that I am missing. $(document).ready(function() { // delete the selected row from the database $(document).on('click', '#business-area-delete-button', { model: "BusinessArea" }, deleteRow); $(document).on('click', '#business-type-delete-button', { model: "BusinessType" }, deleteRow); $(document).on('click', '#client-delete-button', { model: "Client" }, deleteRow); $(document).on('click', '#client-type-delete-button', { model: "ClientType" }, deleteRow); $(document).on('click', '#communication-channel-type', { model: "CommunicationChannelType" }, deleteRow); $(document).on('click', '#parameter-type-delete-button', { model: "ParameterType" }, deleteRow); $(document).on('click', '#validation-method-delete-button', { model: "ValidationMethod" }, deleteRow); } The event handler: function deleteRow(event){ $.ajax( { type:'POST', data: { id: $(".delete-row").attr("id") }, url:"/mysite/admin/delete" + event.data.model, success:function(data,textStatus){ $('#main-content').html(data); }, error:function(XMLHttpRequest,textStatus,errorThrown){ jQuery('#alerts').html(XMLHttpRequest.responseText); }, complete:function(XMLHttpRequest,textStatus){ placeAlerts() } } ); return false }; This works only for the button with id validation-method-delete-button. I use document and not the button itself because the button is contained in a template that is loaded later via ajax. I have this working for a similar function that selects a row in a table; however, I am not attempting to pass data in that scenario.
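
    One way to cut the repetition (and rule out a typo in any single selector - note that '#communication-channel-type' above is missing the '-delete-button' suffix the others have) is a shared class plus a data attribute, so one delegated handler covers every button. A sketch, assuming the markup can be changed:

      // markup sketch: <button class="delete-button" data-model="BusinessArea">...</button>
      $(document).ready(function () {
          $(document).on('click', '.delete-button', function () {
              var model = $(this).data('model');   // replaces the per-button { model: ... } maps
              $.ajax({
                  type: 'POST',
                  data: { id: $(".delete-row").attr("id") },
                  url: "/mysite/admin/delete" + model,
                  success: function (data) { $('#main-content').html(data); },
                  error: function (xhr) { jQuery('#alerts').html(xhr.responseText); },
                  complete: function () { placeAlerts(); }
              });
              return false;
          });
      });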


  • Looking for an issue tracker / project management software that automatically manages start/completion dates based on priority/relationships

    - by user361910
    So, a little background. We are a small company with a half-dozen developers. We have been evaluating many project management / issue tracking software packages (TRAC, Redmine, FogBugz, etc) and trying to create a decent process/workflow for managing projects, adding features, fixing bugs, etc. I'd like to think our requirements are similar to most other companies our size. Essentially, what this comes down to is 1) An easy way for the PM and developers to track projects, issues, bugs, etc 2) An easy way for the PM and admin/executives to get a birds-eye view of progress and easily manage timelines, schedules, and priorities. After trying TRAC, we moved to Redmine. We found Redmine to be easier than track to administer and the ability to have sub-projects and sub-tickets is great. However, the big problem we ran into is the fact that it is very difficult to manage schedules and timelines. It seems like it would be incredibly time-intensive to manage because you have to manually enter a start date, estimated time, and end date for each ticket, project, etc. So if you setup a month's schedule based on priorities, what are you supposed to do when a particular ticket/issue/subproject takes up more time than was estimated. Right now, it appears I would have to go back in and MANUALLY change the start/end date of every single item. What would be ideal is to be able to set priorities/dependencies and estimated time on tickets/milestones, and have the software automatically manage the start/end dates. Does anyone know how to get Redmine to do this, or recommend a different software package that can do something like this!


  • Tree iterator, can you optimize this any further?

    - by Ron
    As a follow-up to my original question about a small piece of this code, I decided to ask again to see if you can do better than what we came up with so far. The code below iterates over a binary tree (left/right = child/next). I believe there is room for one less conditional in here (the down boolean). The fastest answer wins! The cnt statement can be multiple statements, so let's make sure it appears only once. The child() and next() member functions are about 30x as slow as the hasChild() and hasNext() operations. Keep it iterative <-- dropped this requirement as the recursive solution presented was faster. This is C++ code; the visit order of the nodes must stay as in the example below (hit parents first, then the children, then the 'next' nodes). BaseNodePtr is a boost::shared_ptr and thus assignments are slow; avoid any temporary BaseNodePtr variables. Currently this code takes 5897ms to visit 62200000 nodes in a test tree, calling this function 200,000 times. void processTree (BaseNodePtr current, unsigned int & cnt ) { bool down = true; while ( true ) { if ( down ) { while (true) { cnt++; // this can/will be multiple statements if (!current->hasChild()) break; current = current->child(); } } if ( current->hasNext() ) { down = true; current = current->next(); } else { down = false; current = current->parent(); if (!current) return; // done. } } }
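
    For reference, a sketch of the recursive shape alluded to above (visit the node, then its child subtree, then its next subtree - the same parents-then-children-then-next order), which drops both the down flag and the calls to parent(); taking the shared_ptr by const reference avoids a reference-count copy on every call:

      void processTree(const BaseNodePtr &current, unsigned int &cnt)
      {
          cnt++;                                   // this can/will be multiple statements
          if (current->hasChild())
              processTree(current->child(), cnt);
          if (current->hasNext())
              processTree(current->next(), cnt);
      }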


  • multi-threading in MFC

    - by kiddo
    Hello all, in my application there is a small piece of functionality that reads files to get some information. The file count would be at least 50, so I thought of implementing threading. Say the user gives 50 files; I want to split them as 5 * 10 - 5 threads should be created so that each thread can handle 10 files, which can speed up the process. Also, from the code below you can see that some variables are shared. I have read some articles about threading and I am aware that only one thread should access a variable/control at a time (CCriticalSection can be used for that). As a beginner, I am finding it hard to implement what I have learned about threading. Could somebody please give me some idea with the code shown below? Thanks in advance. The file-reading function: void CMyClass::GetWorkFilesInfo(CStringArray& dataFilesArray,CString* dataFilesB, int* check,DWORD noOfFiles,LPWSTR path) { CString cFilePath; int cIndex =0; int exceptionInd = 0; wchar_t** filesForWork = new wchar_t*[noOfFiles]; int tempCheck; int localIndex =0; for(int index = 0;index < noOfFiles; index++) { tempCheck = *(check + index); if(tempCheck == NOCHECKBOX) { *(filesForWork+cIndex) = new TCHAR[MAX_PATH]; wcscpy(*(filesForWork+cIndex),*(dataFilesB +index)); cIndex++; } else//CHECKED or UNCHECKED { dataFilesArray.Add(*(dataFilesB+index)); *(check + localIndex) = *(check + index); localIndex++; } } WorkFiles(&cFilePath,dataFilesArray,filesForWork, path, cIndex); dataFilesArray.Add(cFilePath); *(check + localIndex) = CHECKED; }
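
    As a rough sketch of one common MFC shape for this (the names below are illustrative, not taken from the code above): give each worker a small struct describing its slice of the files plus a pointer to a shared CCriticalSection, start the workers with AfxBeginThread, and hold the lock only while touching shared data:

      // sketch: split the files into groups and hand each group to one MFC worker thread
      struct ThreadWork
      {
          CStringArray files;          // the ~10 files this worker should read
          CCriticalSection *lock;      // guards whatever the workers share
      };

      UINT __cdecl ReadFilesProc(LPVOID pParam)
      {
          ThreadWork *work = static_cast<ThreadWork*>(pParam);
          for (int i = 0; i < work->files.GetSize(); ++i)
          {
              // ... read work->files[i] into local variables here ...
              CSingleLock guard(work->lock, TRUE);   // lock only while updating shared state
              // ... copy this file's results into the shared containers ...
          }
          return 0;
      }

      // Launching, say, five workers over pre-filled workItems[0..4]:
      //   CWinThread *t = AfxBeginThread(ReadFilesProc, &workItems[i],
      //                                  THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
      //   t->m_bAutoDelete = FALSE;   // keep the handle valid for WaitForMultipleObjects
      //   t->ResumeThread();
      // then wait on the collected t->m_hThread handles before combining the results.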


  • Help me refactor my World Cup Challenge Script

    - by kylemac
    I am setting up a World Cup Challenge between some friends, and decided to practice my Ruby and write a small script to automate the process. The Problem: 32 World Cup qualifiers split into 4 tiers by their Fifa ranking 8 entries Each entry is assigned 1 random team per tier Winner takes all :-) I wrote something that suffices yet is admittedly brute force. But, in my attempt to improve my Ruby, I acknowlege that this code isn't the most elegant solution around - So I turn to you, the experts, to show me the way. It may be more clear to check out this gist - https://gist.github.com/91e1f1c392bed8074531 My Current (poor) solution: require 'yaml' @teams = YAML::load(File.open('teams.yaml')) @players = %w[Player1 Player2 Player3 Player4 Player5 Player6 Player7 Player8] results = Hash.new players = @players.sort_by{rand} players.each_with_index do |p, i| results[p] = Array[@teams['teir_one'][i]] end second = @players.sort_by{rand} second.each_with_index do |p, i| results[p] << @teams['teir_two'][i] end third = @players.sort_by{rand} third.each_with_index do |p, i| results[p] << @teams['teir_three'][i] end fourth = @players.sort_by{rand} fourth.each_with_index do |p, i| results[p] << @teams['teir_four'][i] end p results I am sure there is a better way to iterate through the tiers, and duplicating the @players object ( dup() or clone() maybe?) So from one Cup Fan to another, help me out.
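
    A sketch of one way to collapse the four copies of the draw loop by iterating over the tier keys themselves (keeping the 'teir_*' spellings from the YAML used above; shuffle can stand in for sort_by { rand } on Ruby 1.8.7 and later):

      require 'yaml'

      @teams   = YAML::load(File.open('teams.yaml'))
      @players = %w[Player1 Player2 Player3 Player4 Player5 Player6 Player7 Player8]

      # a Hash whose values default to empty arrays
      results = Hash.new { |hash, key| hash[key] = [] }

      %w[teir_one teir_two teir_three teir_four].each do |tier|
        # a fresh random order per tier, mirroring the repeated sort_by{rand} blocks
        @players.shuffle.each_with_index do |player, i|
          results[player] << @teams[tier][i]
        end
      end

      p results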


  • fetching the label text from database in C#

    - by Yilmaz Paçariz
    private void button5_Click(object sender, EventArgs e) { SqlConnection conn = new SqlConnection("Data Source=MAZI-PC\\PROJECTACC;Initial Catalog=programDB;Integrated Security=True"); SqlCommand cmd = new SqlCommand("select label_sh from label_text where label_form='2' and label_form_labelID='1'", conn); conn.Open(); label1.Text = cmd.ExecuteReader().ToString(); conn.Close(); SqlConnection conn1 = new SqlConnection("Data Source=MAZI-PC\\PROJECTACC;Initial Catalog=programDB;Integrated Security=True"); SqlCommand cmd1 = new SqlCommand("select label_sh from label_text where label_form='2' and label_form_labelID='2'", conn1); conn1.Open(); label2.Text = cmd1.ExecuteReader().ToString(); conn1.Close(); SqlConnection conn2 = new SqlConnection("Data Source=MAZI-PC\\PROJECTACC;Initial Catalog=programDB;Integrated Security=True"); SqlCommand cmd2 = new SqlCommand("select label_sh from label_text where label_form='2' and label_form_labelID='3'", conn2); conn2.Open(); label3.Text = cmd2.ExecuteReader().ToString(); conn2.Close(); } I am developing a small project in C#... Using Visiual Studio 2010... I want to fetch the label texts from database in order to change the user interface language with a button... I wrote this code but there is a problem in SQLDATAREADER in label text parts it shows System.Data.SqlClient.SqlDataReader I cant fix, could you help me?
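
    The System.Data.SqlClient.SqlDataReader text appears because ToString() is called on the reader object itself rather than on a value read from it. For a single value per query, ExecuteScalar returns the first column of the first row directly. A sketch of the handler reworked that way (the helper name and its parameters are illustrative), with the label ID parameterised so the query is not repeated three times:

      private string GetLabelText(int formId, int labelId)
      {
          using (SqlConnection conn = new SqlConnection(
              "Data Source=MAZI-PC\\PROJECTACC;Initial Catalog=programDB;Integrated Security=True"))
          using (SqlCommand cmd = new SqlCommand(
              "select label_sh from label_text where label_form=@form and label_form_labelID=@label", conn))
          {
              cmd.Parameters.AddWithValue("@form", formId);
              cmd.Parameters.AddWithValue("@label", labelId);
              conn.Open();
              object result = cmd.ExecuteScalar();      // first column of the first row, or null
              return result == null ? string.Empty : result.ToString();
          }
      }

      private void button5_Click(object sender, EventArgs e)
      {
          label1.Text = GetLabelText(2, 1);
          label2.Text = GetLabelText(2, 2);
          label3.Text = GetLabelText(2, 3);
      }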


  • How to determine if the camera button is half pressed

    - by Matthew
    I am creating a small test camera application, and I would like to be able to implement a feature that allows focus text bars to be present on the screen while the hardware camera button is pressed halfway down. I created a camera_ButtonHalfPress event to perform the focus action, but I am unsure of how to toggle the text bars I would like to show on the screen accordingly. Essentially, my goal would be to show the text bars while the camera button is pressed halfway down, and then remove them if the button is pressed all the way or the button is released before being pressed all the way down. The button being released is the part I am having trouble with. What I have is as follows: MainPage.xaml.cs private void camera_ButtonHalfPress(object sender, EventArgs e) { //camera.Focus(); // Show the focus brackets. focusBrackets.Visibility = Visibility.Visible; } } private void camera_ButtonFullPress(object sender, EventArgs e) { // Hide the focus brackets. focusBrackets.Visibility = Visibility.Collapsed; camera.CaptureImage(); } } Currently, if the user decides to release the camera button before it is pressed all the way, the focus brackets persist on the screen. How might I fix this issue?
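
    Assuming the handlers above are wired to the CameraButtons events in Microsoft.Devices, the release case can get its own handler: the hardware shutter button raises ShutterKeyHalfPressed, ShutterKeyPressed and ShutterKeyReleased as separate static events. A sketch:

      // sketch: wire up the release event alongside the existing half/full press handlers
      public MainPage()
      {
          InitializeComponent();
          CameraButtons.ShutterKeyHalfPressed += camera_ButtonHalfPress;
          CameraButtons.ShutterKeyPressed     += camera_ButtonFullPress;
          CameraButtons.ShutterKeyReleased    += camera_ButtonReleased;
      }

      private void camera_ButtonReleased(object sender, EventArgs e)
      {
          // hide the brackets if the user lets go before a full press
          if (focusBrackets.Visibility == Visibility.Visible)
          {
              focusBrackets.Visibility = Visibility.Collapsed;
          }
      }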


  • Return the exact ID using fetch_assoc

    - by Selom
    Hi, I have a small problem in my code and I need your help. Well, I'm using fetch_assoc to get data from the database, and I need the ID number of each of the returned values. The issue is that my code only returns the ID number of the last row. Here's my code: <form method="post" action="action.php"> <select name="album" style="border:1px solid #CCC; font-size:11px; padding:1px"> <?php $sql = "SELECT * FROM table"; $stmt = $dbh -> prepare($sql); $stmt -> execute(); while($row = $stmt -> fetch(PDO::FETCH_ASSOC)) { $album_ID = $row['album_ID']; $value = $row['album_name']; print "<option value ='". $value ."'>". $value. "</option>"; } ?> </select> <input type="hidden" name="album_ID" value="<?php print $album_ID?>"/> </form> I would like the hidden input to hold the selected album ID, but it always holds the album ID of the last row. Please help.
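
    A hidden field can only carry a single value, so it will always end up holding whatever was written to it last. Putting the ID into each option's value attribute keeps the ID paired with its name, and the selected ID then arrives in action.php as $_POST['album'] with no hidden field needed. A sketch of the loop with that change (the visible text stays the album name):

      <?php
      while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
          $album_ID = $row['album_ID'];
          $value    = $row['album_name'];
          // the ID goes into value=""; the name is what the user sees
          print "<option value='" . $album_ID . "'>" . $value . "</option>";
      }
      ?>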


  • In Python, how to make sure database connection will always close before leaving a code block?

    - by Cawas
    I want to prevent database connection being open as much as possible, because this code will run on an intensive used server and people here already told me database connections should always be closed as soon as possible. def do_something_that_needs_database (): dbConnection = MySQLdb.connect(host=args['database_host'], user=args['database_user'], passwd=args['database_pass'], db=args['database_tabl'], cursorclass=MySQLdb.cursors.DictCursor) dbCursor = dbConnection.cursor() dbCursor.execute('SELECT COUNT(*) total FROM table') row = dbCursor.fetchone() if row['total'] == 0: print 'error: table have no records' dbCursor.execute('UPDATE table SET field="%s"', whatever_value) return None print 'table is ok' dbCursor.execute('UPDATE table SET field="%s"', another_value) # a lot more of workflow done here dbConnection.close() # even more stuff would come below I believe that leaves a database connection open when there is no row on the table, tho I'm still really not sure how it works. Anyway, maybe that is bad design in the sense that I could open and close a DB connection after each small block of execute. And sure, I could just add a close right before the return in that case... But how could I always properly close the DB without having to worry if I have that return, or a raise, or continue, or whatever in the middle? I'm thinking in something like a code block, similar to using try, like in the following suggestion, which obviously doesn't work: def do_something_that_needs_database (): dbConnection = MySQLdb.connect(host=args['database_host'], user=args['database_user'], passwd=args['database_pass'], db=args['database_tabl'], cursorclass=MySQLdb.cursors.DictCursor) try: dbCursor = dbConnection.cursor() dbCursor.execute('SELECT COUNT(*) total FROM table') row = dbCursor.fetchone() if row['total'] == 0: print 'error: table have no records' dbCursor.execute('UPDATE table SET field="%s"', whatever_value) return None print 'table is ok' dbCursor.execute('UPDATE table SET field="%s"', another_value) # again, that same lot of line codes done here except ExitingCodeBlock: closeDb(dbConnection) # still, that "even more stuff" from before would come below I don't think there is anything similar to ExitingCodeBlock for an exception, tho I know there is the try else, but I hope Python already have a similar feature... Or maybe someone can suggest me a paradigm move and tell me this is awful and highly advise me to never do that. Maybe this is just something to not worry about and let MySQLdb handle it, or is it?
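
    The usual pattern for "always close, however the block is left" is try/finally (contextlib.closing wraps the same idea); the finally clause runs on a normal fall-through, on every return, and on an exception. A sketch with the same calls as above:

      def do_something_that_needs_database():
          dbConnection = MySQLdb.connect(host=args['database_host'],
                                         user=args['database_user'],
                                         passwd=args['database_pass'],
                                         db=args['database_tabl'],
                                         cursorclass=MySQLdb.cursors.DictCursor)
          try:
              dbCursor = dbConnection.cursor()
              dbCursor.execute('SELECT COUNT(*) total FROM table')
              row = dbCursor.fetchone()
              if row['total'] == 0:
                  print 'error: table have no records'
                  dbCursor.execute('UPDATE table SET field="%s"', whatever_value)
                  return None
              print 'table is ok'
              dbCursor.execute('UPDATE table SET field="%s"', another_value)
              # the rest of the workflow goes here
          finally:
              dbConnection.close()   # runs on return, on raise, and on normal completion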


  • Database source control with Oracle

    - by borjab
    I have been looking for hours for a way to check a database into source control. My first idea was a program for calculating database diffs and asking all the developers to implement their changes as new diff scripts. Now I find that if I can dump a database into a file, I could check it in and use it as just another type of file. The main conditions are: Works for Oracle 9R2. Human readable, so we can use diff to see the differences (.dmp files don't seem readable). All tables in a batch - we have more than 200 tables. It stores BOTH STRUCTURE AND DATA. It supports CLOB and RAW types. It stores procedures, packages and their bodies, functions, tables, views, indexes, constraints, sequences and synonyms. It can be turned into an executable script to rebuild the database on a clean machine. Not limited to really small databases (supports at least 200,000 rows). It is not easy; I have downloaded a lot of demos that fail in one way or another. EDIT: I wouldn't mind alternative approaches provided they allow us to check a working system against our release DATABASE STRUCTURE AND OBJECTS + DATA in a batch mode. By the way, our project has been developed for years; some approaches can be easily implemented when you make a fresh start but seem hard at this point. EDIT: To understand the problem better, let's say that some users can sometimes make changes to the config data in the production environment, or developers might create a new field or alter a view without notice in the release branch. I need to be aware of these changes or it will be complicated to merge the changes into production.
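
    For the "human-readable dump of the structure" part, one building block worth knowing is DBMS_METADATA (available since Oracle 9i), which emits objects as CREATE statements that can be spooled to text, diffed and checked in; data would still need a separate export, for example per-table INSERT scripts. A minimal SQL*Plus sketch for tables (the same get_ddl call works for views, procedures, sequences and so on):

      SET LONG 1000000
      SET PAGESIZE 0
      SET LINESIZE 200
      SET TRIMSPOOL ON
      SPOOL tables_ddl.sql

      -- one CREATE TABLE statement per table in the current schema
      SELECT dbms_metadata.get_ddl('TABLE', table_name) FROM user_tables;

      SPOOL OFF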


  • Dynamically changing a .click handler's value - jQuery IE issue

    - by user326100
    Hello guys, it's the first time I'm asking here, so sorry if the answer is already available. I have a very small jQuery script that changes the parameter of the onclick attr on a DIV. It acts as right and left arrows for some content in the middle. Basically I set onclick="foo(1)"; then, when it gets clicked, it should change the value 1 to 2, and keep changing every time I click. In the jQuery function I'm using: $("#v_arrow_r").attr('onclick','').unbind().click(newclick_next); It works like a charm on FF and Chrome, but does not work on IE. Here is the code: if (start == 24) { var a = 0; var b = 0; } else { var a = start-6; var b = start+6; } next = "home_featured_videos(" + b + ");"; newclick_next = eval("(function(){"+next+"});"); prev = "home_featured_videos(" + a + ");"; newclick_prev = eval("(function(){"+prev+"});"); $('#video-module').css('background',''); $('#video-module').html(response); $("#v_arrow_l").attr('onclick','').unbind().click(newclick_prev); $("#v_arrow_r").attr('onclick','').unbind().click(newclick_next); The HTML: //CONTENT So, like I said, I define the onclick attr when the page opens and it works well on IE. But when I click the arrow and call the function, the onclick is set to null and I add the function to .click; then IE stops working - the click is dead. Does anybody have an idea why this is happening? Thanks in advance. Kind regards, Varois
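
    For what it's worth, the eval/onclick round trip can usually be avoided entirely by rebinding with plain closures over the new offsets, which sidesteps how older IE handles the onclick attribute. A sketch of the success handler written that way (same ids and function name as above; removeAttr('onclick') clears the original inline handler once):

      // sketch: rebind the arrows with closures instead of eval'd onclick strings
      function rebindArrows(start) {
          var a, b;
          if (start == 24) {
              a = 0;
              b = 0;
          } else {
              a = start - 6;
              b = start + 6;
          }
          $("#v_arrow_l").removeAttr('onclick').unbind('click').click(function () { home_featured_videos(a); });
          $("#v_arrow_r").removeAttr('onclick').unbind('click').click(function () { home_featured_videos(b); });
      }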

