Search Results

Search found 69140 results on 2766 pages for 'design time'.

  • Desktop application validity

    - by Umesh
    I am creating a desktop application using AIR. In that application the user is allowed to download some resources which have a life span of 2 days. I store the date on which the user downloaded them, but how can I check whether 2 days have passed since then? Right now I am checking against the current system date, but when the user sets the system date back, the resources start working again, which I don't want. How do desktop applications such as Flex Builder implement trial periods? How do they track the days remaining? ~Umesh
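
    One common approach (my suggestion, in practice often combined with an occasional server-side check) is never to trust the raw system clock on its own: persist the latest timestamp you have ever observed and refuse to move backwards. A minimal sketch in Java with hypothetical names; in AIR the same idea applies using a local store such as the encrypted local store:

        // Sketch: persist the newest timestamp ever seen; a rolled-back clock can never lower it.
        import java.util.prefs.Preferences;

        public class TrialClock {
            private static final Preferences PREFS = Preferences.userNodeForPackage(TrialClock.class);

            /** Current time that refuses to go backwards across runs. */
            public static long now() {
                long system = System.currentTimeMillis();
                long monotonic = Math.max(system, PREFS.getLong("lastSeen", 0L));
                PREFS.putLong("lastSeen", monotonic);
                return monotonic;
            }

            /** True if a resource downloaded at downloadMillis is still inside its 2-day life span. */
            public static boolean stillValid(long downloadMillis) {
                return now() - downloadMillis <= 2L * 24 * 60 * 60 * 1000;
            }
        }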

  • View artifacts leaking into the model of MVC

    - by Jono
    In an ASP.NET MVC application (which has very little chance of having its view technology ported to something non-HTML, but whose functional requirements evolve weekly), how much HTML should ideally be allowed to be directly represented in the Model? I might come across as a design bigot for this, but I regard it as bad practice to allow any view constructs to "leak" into the model in an MVC application (and vice versa). For example, a Model that represents an item you're about to purchase should know nothing about the HTML check box that says "add giftwrap/message", nor should it know about any HTML drop-down lists for payment card types. Conversely, the View shouldn't be doing work like figuring out button text by translating keys into values (by looking in resource files).
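
    One common way to keep that separation is to put the presentation-only state (check boxes, drop-down options, localized button text) into a view model that the controller builds around the untouched domain object. A rough sketch, in Java rather than C# and with invented class names, just to show the shape of the split:

        // Domain model: knows nothing about HTML widgets or resource keys.
        class PurchaseItem {
            final String sku;
            final int priceInCents;
            PurchaseItem(String sku, int priceInCents) { this.sku = sku; this.priceInCents = priceInCents; }
        }

        // View model: presentation concerns only, assembled by the controller for the view.
        class PurchaseItemViewModel {
            final PurchaseItem item;                          // wraps the domain object
            final boolean giftWrapChecked;                    // "add giftwrap/message" check box state
            final java.util.List<String> paymentCardOptions;  // entries for the drop-down list
            final String submitButtonText;                    // already translated from resource files

            PurchaseItemViewModel(PurchaseItem item, boolean giftWrapChecked,
                                  java.util.List<String> paymentCardOptions, String submitButtonText) {
                this.item = item;
                this.giftWrapChecked = giftWrapChecked;
                this.paymentCardOptions = paymentCardOptions;
                this.submitButtonText = submitButtonText;
            }
        }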

  • Any ideas for developing a RISC-processor-friendly string allocator?

    - by Richard Fabian
    I'm working on some tools to enable high-throughput data-oriented development, and one thing I don't have an immediate answer for is how to allocate strings quickly. On RISC processors you have the additional implementation problem that the CPU doesn't like branching, which is what I'm trying to minimise or avoid. Also, cache coherence is important on most CPUs, so that has to be influential in the design too. So, how would you go about reducing the overhead of a generic string allocator? Sometimes it's easier to solve a more explicit problem, so any ideas for string sizes of 5-30?
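
    One branch-light layout (my sketch, not from the question, and written in Java only to illustrate the idea; a real implementation would more likely be in C or C++): carve a contiguous byte arena into fixed 32-byte slots, which comfortably covers the 5-30 byte case, so allocation is a single index bump and neighbouring strings share cache lines. The slot size and names below are assumptions:

        // Fixed-slot string arena: each string of up to 30 bytes occupies one 32-byte slot
        // (1 length byte + payload) in a contiguous buffer, so allocation is branch-free
        // apart from the capacity check and sequential allocations stay cache-friendly.
        final class StringArena {
            private static final int SLOT = 32;
            private final byte[] arena;
            private int next;                     // index of the next free slot

            StringArena(int slots) { arena = new byte[slots * SLOT]; }

            /** Copies s into the arena, returning its slot handle, or -1 if full or too long. */
            int allocate(byte[] s) {
                if (s.length >= SLOT || next * SLOT >= arena.length) return -1;
                int base = next * SLOT;
                arena[base] = (byte) s.length;
                System.arraycopy(s, 0, arena, base + 1, s.length);
                return next++;
            }

            /** Reads the string stored in the given slot back out. */
            byte[] get(int handle) {
                int base = handle * SLOT;
                return java.util.Arrays.copyOfRange(arena, base + 1, base + 1 + arena[base]);
            }
        }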

  • Top tips for designing GUIs?

    - by oxbow_lakes
    A while back I read (before I lost it) a great book called GUI Bloopers, which was full of examples of bad GUI design but also full of useful tidbits like "Don't call something a Dialog one minute and a Popup the next." What top tips would you give for designing/documenting a GUI? It would be particularly useful to hear about widgets you designed to cram readable information into as little screen real estate as possible. I'm going to roll this off with one of my own: avoid trees (e.g. Swing's JTree) unless you really can't avoid it, or have an unbounded hierarchy of stuff. I have found that users don't find them intuitive and that they are hard to navigate and filter. PS. I think this question differs from this one as I'm asking for generalist tips.

  • Monitoring UDP socket in glib(mm) eats up CPU time

    - by Gyorgy Szekely
    Hi, I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native winsock and I use a glibmm IOChannel to connect it to the application main loop. The socket is read with recvfrom. My problem is: this setup eats 25% CPU time on a 3 GHz workstation. Can somebody tell me why? The application is idle in this case, and if I remove the UDP code, CPU usage drops to almost zero. As the application has to perform some CPU-intensive tasks, I could imagine better ways to spend that 25%. Here are some code excerpts (sorry for the printf's ;) ):

        /* bind */
        void UDPInterface::bindToPort(unsigned short port)
        {
            struct sockaddr_in target;
            WSADATA wsaData;

            target.sin_family = AF_INET;
            target.sin_port = htons(port);
            target.sin_addr.s_addr = 0;

            if ( WSAStartup ( 0x0202, &wsaData ) )
            {
                printf("WSAStartup failed!\n");
                exit(0); // :)
                WSACleanup();
            }

            sock = socket( AF_INET, SOCK_DGRAM, 0 );
            if (sock == INVALID_SOCKET)
            {
                printf("invalid socket!\n");
                exit(0);
            }

            if (bind(sock, (struct sockaddr*) &target, sizeof(struct sockaddr_in)) == SOCKET_ERROR)
            {
                printf("failed to bind to port!\n");
                exit(0);
            }

            printf("[UDPInterface::bindToPort] listening on port %i\n", port);
        }

        /* read */
        bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
        {
            recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
            /* process packet... */
        }

        /* glibmm connect */
        Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock);
        Glib::signal_io().connect( sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN );

    I've read here in some other question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into nonblocking mode and that this is "a side-effect of the implementation and unavoidable". Does this explain the CPU usage? It's not clear to me. Whether I access the socket through glib or call recvfrom() directly doesn't seem to make much difference, since the CPU is used up before any packet arrives and the read handler gets invoked. Also, the glibmm docs state that it's OK to call recvfrom() even if the socket is polled (Glib::IOChannel::create_from_win32_socket()). I've tried compiling the program with -pg and creating a per-function CPU usage report with gprof. This wasn't useful, because the time is not spent in my program but in some external glib/glibmm DLL.

  • Duplicate ping response when running Ubuntu as virtual machine (VMWare)

    - by Stonerain
    I have the following setup: my router is 192.168.0.1, my host computer (Windows 7) is 192.168.0.3, and Ubuntu runs as a virtual machine on the host. The VMware network setting is Bridged mode. I've modified the Ubuntu network settings in /etc/network/interfaces with the following config:

        iface eth0 inet static
            address 192.168.0.220
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1

    The internet works correctly and I can install packages, but it gets weird if I try to ping something:

        PING belpak.by (193.232.248.80) 56(84) bytes of data.
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=250 time=17.0 ms
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=249 time=17.0 ms (DUP!)
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=248 time=17.0 ms (DUP!)
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=247 time=17.0 ms (DUP!)
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=246 time=17.0 ms (DUP!)
        ^CFrom 192.168.0.1 icmp_seq=2 Time to live exceeded

        --- belpak.by ping statistics ---
        2 packets transmitted, 1 received, +4 duplicates, +6 errors, 50% packet loss, time 999ms
        rtt min/avg/max/mdev = 17.023/17.041/17.048/0.117 ms

    I think even more interesting are the results of pinging the router itself:

        stonerain@ubuntu:~$ ping 192.168.0.1 -c 1
        PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
        From 192.168.0.3: icmp_seq=1 Redirect Network(New nexthop: 192.168.0.1)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=254 time=6.64 ms

        --- 192.168.0.1 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 6.644/6.644/6.644/0.000 ms

    But if I set -c 2:

        ...
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=252 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=251 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=254 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=253 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=252 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=251 time=13.5 ms (DUP!)
        From 192.168.0.3: icmp_seq=2 Redirect Network(New nexthop: 192.168.0.1)
        64 bytes from 192.168.0.1: icmp_seq=2 ttl=254 time=7.87 ms

        --- 192.168.0.1 ping statistics ---
        2 packets transmitted, 2 received, +256 duplicates, 0% packet loss, time 1002ms
        rtt min/avg/max/mdev = 6.666/10.141/13.556/2.410 ms

    Pinging the host machine, on the other hand, works absolutely correctly: no DUPs, no errors. What seems to be the problem and how can I fix it? Thank you.

  • Creational pattern for instances depending on multiple subclass instances

    - by markusw
    I have a problem for which I was not able to identify a suitable design pattern. I want to create instances depending on a given type that has been passed to a factory method. What I am doing until now is the following:

        T create(SuperType x) {
            if (x instanceof SubType1) {
                // do some stuff and return a new SubType extends T
            } else if (x instanceof SubType2) {
                // do some stuff and return a new SubType extends T
            } else if (...) {
                ...
            } else {
                throw new UnsupportedOperationException("nothing defined for " + x);
            }
        }

    It doesn't seem like best practice to me. Does anybody have an idea how to solve this in a better way?
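
    One common alternative, assuming the set of subtypes is known when they are registered, is to replace the instanceof chain with a registry that maps each concrete subclass to a small factory, so supporting a new subtype means one registration instead of another branch. A rough Java sketch with invented names (SuperType and the subtypes are the question's own classes):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Function;

        // B is the common base type (SuperType in the question), T the product type.
        final class FactoryRegistry<B, T> {
            private final Map<Class<?>, Function<B, T>> factories = new HashMap<>();

            <S extends B> void register(Class<S> type, Function<S, T> factory) {
                // Wrap the factory so lookup by the runtime class stays type-safe.
                factories.put(type, x -> factory.apply(type.cast(x)));
            }

            T create(B x) {
                Function<B, T> factory = factories.get(x.getClass());  // exact-class match only
                if (factory == null) {
                    throw new UnsupportedOperationException("nothing defined for " + x);
                }
                return factory.apply(x);
            }
        }

        // Usage (hypothetical):
        //   FactoryRegistry<SuperType, T> registry = new FactoryRegistry<>();
        //   registry.register(SubType1.class, s -> /* do some stuff and build the result */);
        //   T result = registry.create(x);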

  • System call time out?

    - by Arnold
    Hi, I'm using Unix system() calls to gunzip and gzip files. With very large files these sometimes get aborted (i.e. on the cluster compute nodes), while other times (i.e. on the login nodes) they go through. Is there some soft limit on the time a system call may take? What else could it be?

  • What do you do before starting on a project?

    - by hahuang65
    I'm still a pretty new programmer, and I haven't really worked on any large projects yet. However, a few projects for school have shown me something I had never really thought about before: pre-project planning. On one project we ran into a huge problem at the very last minute, and another project was not divided up evenly between partners, such that all the work was actually done at the end. So my question to everyone here is: how do you plan out a project beforehand? Please try to cover the following: design (drawing out the UI by hand, UML, etc.), division of labor, timeline (especially how you estimate how much time is needed for certain things), and anything else you can think of. Thanks for all the help!

  • Android OS 2.2 Permissions: I have absolutely no idea why this simple piece of code doesn't work

    - by Kevin
    I'm just playing around with some code. I create an Activity and simply do something like this:

        long lo = currentTimeMillis();
        System.out.println(lo);
        lo *= 3;
        System.out.println(lo);
        SystemClock.setCurrentTimeMillis(lo);
        System.out.println( currentTimeMillis() );

    Yes, in my AndroidManifest.xml I've added:

        <uses-permission android:name="android.permission.SET_TIME"></uses-permission>
        <uses-permission android:name="android.permission.SET_TIME_ZONE"></uses-permission>

    Nothing changes. The system clock is never reset; it just keeps on ticking. The error I'm getting says that the permission "SET_TIME" was not granted to the program, protection level 3. The permissions are there, and the API docs for 2.2 say that this feature is supported now. I have no idea what I'm doing wrong. If android.content.Intent comes into play, please explain; I don't really understand the idea behind intents! Thanks for any help!
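
    For context (my reading of the error, not stated in the question): protection level 3 is signatureOrSystem, so SET_TIME can only be granted to an app signed with the platform certificate or installed as a system app; declaring it in the manifest of an ordinary app has no effect. For an app that does hold the permission, a sketch of setting the wall clock on API 8+ via AlarmManager (the class name and the one-minute jump are illustrative only):

        import android.app.Activity;
        import android.app.AlarmManager;
        import android.content.Context;
        import android.os.Bundle;

        // Only works when the app actually holds android.permission.SET_TIME.
        public class SetTimeActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                AlarmManager am = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
                am.setTime(System.currentTimeMillis() + 60000L); // move the wall clock forward one minute
            }
        }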

  • Finding changes in MongoDB database

    - by Jonathan Knight
    I'm designing a MongoDB database that works with a script that periodically polls a resource and gets back a response, which is stored in the database. Right now my database has one collection with four fields: id, name, timestamp and data. I need to be able to find out which names had changes in the data field between script runs, and which did not. In pseudocode:

        if (data[name][timestamp] == data[name][timestamp+1])
            // data has not changed
            store data in collection 1
        else
            // data has changed between script runs for this name
            store data in collection 2

    Is there a query that can do this without iterating and running JavaScript over each item in the collection? There are millions of documents, so this would be pretty slow. Should I create a new collection, named after the timestamp, for every time the script runs? Would that make it faster/more organized? Is there a better schema that could be used? The script runs once a day, so I won't run into a namespace limitation any time soon.
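
    One option (my suggestion, not from the question): store a short fingerprint of the data alongside each document when the script writes it, so "did this name change since the last run?" becomes a comparison of two small strings that the polling script can do before deciding which collection to insert into, with no server-side JavaScript at all. A small Java sketch of the fingerprinting part:

        import java.math.BigInteger;
        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class DataFingerprint {
            /** Hex MD5 of the polled data; store it with the document and compare it to the previous run's value. */
            public static String fingerprint(String data) {
                try {
                    MessageDigest md5 = MessageDigest.getInstance("MD5");
                    byte[] digest = md5.digest(data.getBytes(StandardCharsets.UTF_8));
                    return String.format("%032x", new BigInteger(1, digest));
                } catch (NoSuchAlgorithmException e) {
                    throw new IllegalStateException(e); // MD5 is part of every standard JRE
                }
            }

            public static void main(String[] args) {
                boolean changed = !fingerprint("new payload").equals(fingerprint("old payload"));
                System.out.println("changed = " + changed); // decide collection 1 vs collection 2 here
            }
        }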

  • Learning to think in the Object Oriented Way

    - by SpikETidE
    Hi everyone, I am a programmer trying to learn to code in the object-oriented paradigm. I mainly work with PHP and I thought of learning the Zend Framework, so I felt I needed to learn to code in OO PHP. The problem is, having written code using functions for quite a long time, I just can't get my head to think in the OO way. I also feel that I am probably not the only one who has faced this problem. So, how did you learn object-oriented programming, and in particular how did you succeed in "unlearning" coding with functions and learn to see your code as objects? Are there any good resources, books or sites where one could find help? Thanks for sharing your knowledge and experiences.

  • Run-time debugging

    - by Prakash
    We have recently downloaded, installed and compiled the gcc-3.0.4 source code. The gcc compiler built successfully and we were able to compile some sample test .cpp files. I would like to know how we can modify the gcc source code so that it adds additional run-time debugging statements, i.e. a binary compiled by my gcc should print statements like the following to a log file: filename.cpp::FunctionName#linenumber-statement, or any additional information that I can insert via this tailored compiler. Any references would be highly appreciated.

  • Algorithm for non-contiguous netmask match

    - by Gianluca
    Hi, I have to write a really, really fast algorithm to match an IP address against a list of groups, where each group is defined using a notation like 192.168.0.0/252.255.0.255. As you can see, the bitmask can contain zeros even in the middle, so the traditional "longest prefix match" algorithms won't work. If an IP matches two groups, it will be assigned to the group containing the most 1's in the netmask. I'm not working with many entries (let's say < 1000) and I don't want to use a data structure requiring a large memory footprint (let's say 1-2 MB), but it really has to be fast (of course I can't afford a linear search). Do you have any suggestions? Thanks guys. UPDATE: I found something quite interesting at http://www.cse.usf.edu/~ligatti/papers/grouper-conf.pdf, but it's still too memory-hungry for my utopian use case.
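
    One small-footprint sketch (my suggestion, not from the thread): bucket the entries by their distinct masks, keep the buckets ordered by the number of 1 bits in the mask, and answer a query with one AND plus one hash lookup per distinct mask, returning the first hit. With fewer than 1000 entries there are usually only a handful of distinct masks, so this is far from a linear scan over the groups. In Java, with invented names:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        final class GroupMatcher {
            // One bucket per distinct (possibly non-contiguous) mask, richest masks first.
            private static final class Bucket {
                final int mask;
                final Map<Integer, String> groupsByMaskedAddress = new HashMap<>();
                Bucket(int mask) { this.mask = mask; }
            }

            private final List<Bucket> buckets = new ArrayList<>();

            void add(int address, int mask, String group) {
                Bucket bucket = buckets.stream().filter(b -> b.mask == mask).findFirst().orElse(null);
                if (bucket == null) {
                    bucket = new Bucket(mask);
                    buckets.add(bucket);
                    // Keep masks with more 1 bits first so the first hit wins the tie-break rule.
                    buckets.sort((a, b) -> Integer.bitCount(b.mask) - Integer.bitCount(a.mask));
                }
                bucket.groupsByMaskedAddress.put(address & mask, group);
            }

            /** Returns the matching group whose mask has the most 1 bits, or null if none matches. */
            String match(int ip) {
                for (Bucket bucket : buckets) {
                    String group = bucket.groupsByMaskedAddress.get(ip & bucket.mask);
                    if (group != null) return group;
                }
                return null;
            }
        }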

  • SEC_TO_TIME() conversion to java.sql.Time error

    - by chun
    Hi, I have an aggregate column holding microseconds, and a report (with Jasper) has to show this indicator as HH:mm:ss. What I did is use SEC_TO_TIME(sum(col)/1000), but when mapping to java.sql.Time it doesn't work when the hour value of the result goes over 24 (e.g. 36:33:33). Then I thought of another way: not using SEC_TO_TIME and just mapping the value as a BigDecimal, but I don't know what Java class I should use to format it, since the default HH:mm:ss formatting is limited to 24 hours.
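
    java.sql.Time (and the HH pattern of SimpleDateFormat) represents a time of day and so wraps at 24 hours. One option, sketched below under the assumption that the column is mapped to a plain numeric total of seconds, is to format the duration with integer arithmetic instead of a date/time class:

        public class DurationFormat {
            /** Formats a total number of seconds as H:mm:ss without wrapping at 24 hours, e.g. 131613 -> "36:33:33". */
            public static String hhmmss(long totalSeconds) {
                long hours = totalSeconds / 3600;
                long minutes = (totalSeconds % 3600) / 60;
                long seconds = totalSeconds % 60;
                return String.format("%d:%02d:%02d", hours, minutes, seconds);
            }

            public static void main(String[] args) {
                System.out.println(hhmmss(131613)); // prints 36:33:33
            }
        }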

  • R: Forecast package: Automatic algorithm for composite model involving ETS and AR

    - by phanikishan
    Hey, I would like to write code that automatically selects the best composite model using ETS as well as autoregressive models. What criteria should I base my selection on? Also, if I'm using the auto.arima function from the forecast package in R to deduce the number of AR terms and the corresponding coefficients, does my input series necessarily have to be stationary, or would the value of d be selected automatically, thus returning a non-stationary model? Thanks, Phani

  • Fitts's Law, applying it to touch screens

    - by Caylem
    I've been reading a lot about UI design lately, and Fitts's Law keeps popping up. From what I gather, it basically says that the larger an item is, and the closer it is to your cursor, the easier it is to click on. So what about touch-screen devices, where the input comes from multiple touches or just single touches? What are the fundamentals to take into account here? Should it be something like: the hands of the user are on the sides of the device, so the buttons should be close to the left- and right-hand sides of the device? Thanks
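
    For reference (the standard Shannon formulation, not something from the question): Fitts's law predicts the movement time to a target as MT = a + b * log2(1 + D/W), where D is the distance to the target, W is its width along the axis of motion, and a and b are empirically fitted constants. On a touch screen, D is measured from where the finger or thumb currently rests rather than from a cursor, and W is effectively bounded below by fingertip size, which is why edge-reachable, finger-sized targets tend to score best.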
