Search Results

Search found 59272 results on 2371 pages for 'time stamp'.


  • Languages with a clear distinction between subroutines that are purely functional, mutating, state-changing, etc?

    - by CPX
    Lately I've become more and more frustrated that in most modern programming languages I've worked with (C/C++, C#, F#, Ruby, Python, JS and more) there is very little, if any, language support for determining what a subroutine will actually do. Consider the following simple pseudo-code:

        var x = DoSomethingWith(y);

    How do I determine what the call to DoSomethingWith(y) will actually do? Will it mutate y, or will it return a copy of y? Does it depend on global or local state, or is it dependent only on y? Will it change the global or local state? How do closures affect the outcome of the call? In all languages I've encountered, almost none of these questions can be answered by merely looking at the signature of the subroutine, and there is almost never any compile-time or run-time support either. Usually the only way is to put your trust in the author of the API and hope that the documentation and/or naming conventions reveal what the subroutine will actually do. My question is this: do any languages today make symbolic distinctions between these kinds of scenarios and place compile-time constraints on what code you can actually write? (There is of course some support for this in most modern languages, such as different levels of scope and closure, the separation between static and instance code, lambda functions, et cetera. But too often these seem to come into conflict with each other. For instance, a lambda function will usually either be purely functional and simply return a value based on input parameters, or mutate the input parameters in some way. But it is usually possible to access static variables from a lambda function, which in turn can give you access to instance variables, and then it all breaks apart.)
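
    A minimal Java sketch of the ambiguity described above (all names are hypothetical): nothing in the signature of doSomethingWith reveals that it both mutates its argument and reads and writes hidden static state, which is exactly what the question wants the compiler to surface.

        import java.util.List;

        class SignatureAmbiguity {
            static int counter = 0; // hidden global state, invisible at the call site

            // The signature promises nothing about purity: this method mutates
            // the caller's list, changes global state, and returns the same
            // (not a copied) object.
            static List<Integer> doSomethingWith(List<Integer> y) {
                counter++;      // state change the caller cannot see
                y.add(counter); // mutation of the argument
                return y;
            }
        }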


  • Outlook 2007 + Exchange 2010 (Save All Attachments)

    - by RobertPitt
    About 3 weeks back our company upgraded our mail system to Exchange 2010. All went smoothly, with a few issues but nothing major. A few days ago we had a call from a colleague who was unable to save all attachments, from File > Save As > Save All Attachments. When the email has a single attachment it works perfectly normally, and depending on the file type it allows you to save multiple attachments. But there are a lot of file types for which it will not work, such as zip, pdf, doc, etc. Usually a location box opens up asking where we would like to drop the attachments, but now it does nothing: you click Save All Attachments and nothing happens. After hours of research I have come across mixed results. A lot of people on forums explain that they recently crossed over to Exchange 2010 and their issues started there. On the other hand, Microsoft released a KB article (278188), which was depressing at best: that article was published in 2007, as stated by its time stamp, while Exchange 2010 has only come out recently. I'm looking to see if you guys have any clues about what could be causing this, or anything server side that I can take a look at (AD, Exchange, ...). Any help on this is greatly appreciated.


  • "Unable to install GRUB in /dev/sda" when installing GRUB

    - by vicban3d
    I recently bought a shiny new Lenovo Yoga 2 Pro and I want to dual boot it with Ubuntu for study purposes. Its built-in OS is Windows 8.1 and it has a 256GB SSD. I've made a separate 90GB partition just for Ubuntu and a live USB to install it. The first time, everything seemed to work great: I solved the wifi issue by blacklisting ideapad_laptop, the installation went flawlessly, and Ubuntu worked fine. When I got up the next morning and turned on my laptop, it booted into Windows right away without ever showing the GRUB menu. So I tried to reset, checked my partitions with the Disk Manager, and everything looked fine. Since I couldn't find a solution online, I went ahead and formatted the partition to try to install again. This time, and every time since, the installation was aborted and I got a fatal error saying:

        Unable to install GRUB in /dev/sda
        Executing 'grub-install /dev/sda' failed.
        This is a fatal error.

    Can anyone please suggest a solution to this problem? If any further information is needed, I would be happy to provide it. Thanks. When installing, I get the following in the details:

        ubuntu kernel: [ 1946.372741] FAT-fs (sda2): error, fat_get_cluster: invalid cluster chain (i_pos 0).
        ubuntu grub-installer: error: Running 'grub-install --force' failed.


  • Moving while doing loop animation in RPGMaker

    - by AzDesign
    I made a custom class to display a character portrait in RPG Maker XP. Here is the class:

        class Poser
          attr_accessor :images
          def initialize
            @images = Sprite.new
            @images.bitmap = RPG::Cache.picture('Character.png') # 100x300
            @images.x = 540 # place it on the bottom right corner of the screen
            @images.y = 180
          end
        end

    I create an event on the map to create an instance as a global variable, and tada! the image pops out. OK, nice. But I'm not satisfied. Now I want it to have a bobbing-head animation (just like when we breathe, we sometimes bob our head up and down), so I added more methods:

        def move(x, y)
          @images.x += x
          @images.y += y
        end

        def animate(x, y, step, delay)
          forward = true
          2.times {
            step.times {
              wait(delay)
              if forward
                move(x / step, y / step)
              else
                move(-x / step, -y / step)
              end
            }
            wait(delay * 3)
            forward = false
          }
        end

        def wait(time)
          while time > 0
            time -= 1
            Graphics.update
          end
        end

    I called the method to run the animation and it works; so far so good. But the problem is that WHILE the portrait goes up and down, I cannot move my character until the animation is finished. So that's it: I'm stuck in the loop block. What I want is to watch the portrait move up and down while I walk around the village, talk to NPCs, etc. Can anyone solve my problem, or suggest a better solution? Thanks in advance.


  • Constituent Experience Counts In Public Sector

    - by Michael Seback
    Businesses and government organizations are operating in an era of the empowered customer, where service and communication channels are challenged every day. Consumers in the private sector have high expectations formed by purchasing gifts online, reading reviews on social sites, and expecting the companies they do business with to know and reward them. In the public sector, constituents likewise expect government organizations to provide consistent and timely service across agencies and touch points. Examples include requesting critical city services, applying for social assistance, or reviewing insurance plans for a health insurance exchange. If an individual does not receive the services they need at the right time and place, it can create a dire situation involving housing, food or healthcare assistance. Government organizations need to deliver a fast, reliable and personalized experience to constituents. Look at a few statistics from a recent government-focused survey:

    - How do you define good customer service? 70% improved services, 48% shortest time to provide information, 44% shortest time to resolve complaints.
    - What are ways/opportunities to improve customer service? 69% increased collaboration across agencies, 41% increased customer service channels.
    - Are you using the data collected to make informed decisions to improve customer service efforts? 39% say data collection is limited and not used to improve decision making.

    Source: Re-Imagining Customer Service in Government, 2012. Click here to see the highlights. Would you like to get started? Read Eight Steps to Great Constituent Experiences for Government.


  • TechEd North America 2012–Day 3 #msTechEd #teched

    - by Marco Russo (SQLBI)
    Yesterday I spent the longest day at this TechEd: we talked with many people at Community Night until 9pm, and I have to say that, just a few months after Analysis Services 2012 was released, there are already many people using it. And the adoption of PowerPivot is starting to be quite large. Many new ideas and challenges are coming from several different real-world scenarios. I was tired but really happy. Alberto presented his Many-to-Many Relationships in BISM Tabular session, which was in the same time slot as the BI Power Hour. For this reason very few people attended Alberto's session, so I think many will watch the recorded session (it should be available within a few days). So what about today? I'll spend some time at the Technical Learning Center area (full schedule here), but the most important event today will be Querying multi-billion rows with many to many relationships in SSAS Tabular (xVelocity) at the Private Cloud, Public Cloud and Data Platform Theater in the Technical Learning Center area (next to the SQL Server 2012 zone). Why should you attend? Mainly because you will see a live demo over a 4-billion-row table with many-to-many relationships involved in complex queries. And for those of you who think this is not enough reason to attend a 15-minute fun session: well, we'll give away some 8GB USB memory keys to those of you who guess the exact response time of the queries before execution. Convinced? Join us at 11:15am and don't be late; the session finishes at 11:30am! After that, we'll run a book signing session at the bookstore at 12:30pm, and I will be in the Technical Learning Center area from 3:00pm until 5:00pm. See you there!


  • What should I recommend a small company looking for C# developers

    - by Coder
    Here is the issue. I am a senior developer, and one of the start-ups whose system (management system/database/web) I designed a long time ago has grown and needs software updates. I handed their system over to another developer a long time ago, but apparently he has left the job, so they are asking me if I can suggest where to find a new one. The problem is that the company has no clue that IT is not cheap. They expect multiple features to be added for $40, so that's an issue; actually, it's one of the reasons I left the project when I did. Lots of expectations, little pay. Also, I know those people outside work, so I decided to avoid straining the non-work relationships and left the project gracefully. Today they asked me for advice, and I told them that the feature list they want is probably going to cost some if they get a senior developer for the job. So I guess their best bet is to find someone who loves coding and has just finished school. That would give someone a chance to code for money, which is good for a student, and at the same time let the student get some hands-on experience. Then again, the system is not exactly a 20-line console program: there is an MSSQL database, an ASP.NET web page, and a content management system with all the AJAX stuff and some other things. So a student straight out of school could have some problems with that. But, thinking about the issue some more, a junior developer is a tricky deal: without mentoring, he can either screw up royally or just do what's asked. Also, it seems no one is coming to the interviews at all, which is weird. Or maybe not. What should I suggest to them?


  • GLES2.0 3D Android game performance and multi threading the update?

    - by Ofer
    I have profiled my mixed Java\C++ Android game and got the following result: https://dl.dropbox.com/u/8025882/PompiDev/AndroidProfile.png As you can see, the pink chunk is a C++ function that updates the game. It does things like updating the logic, but mostly it generates a "request list" for rendering. The thing is, I generate draw lists in C++ and then send them to Java to process and draw using GLES 2.0. Since then I have been able to bring the update down from 9 ms to about 7 ms, but I would like to ask whether I would benefit from multithreading the update. As I understand the diagram, the function that takes the most time is the one whose color you see on the timeline. So the pink area is taken up mostly by the update. The other area has MainOpenGL.Handle as its main contributor (which is my Java function), but since it's not drawn to the top of the diagram, can I conclude that other things are happening at the same time that use the CPU? Or even GPU work that isn't shown in this diagram? I am not sure how the GPU works here. Does it calculate things in parallel with the CPU, or is its work counted as part of the CPU usage, as on an SoC? I am not sure. Anyway, in case GPU work DOES happen in parallel with the CPU, I would guess that if I ran this C++ update in parallel with the thread that makes the OpenGL calls, I might make use of "dead CPU time" caused by GPU stalls, or have the GPU calls processed earlier because they wouldn't have to wait for the update to finish. How would you suggest improving performance based on that? Thanks.
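
    As a hedged sketch of the multithreading idea (all names are hypothetical, not the poster's actual code): hand completed draw lists from an update thread to the GL thread through a small queue, so the next frame's update can overlap the previous frame's rendering.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        class DrawListPipeline {
            // Capacity 1 keeps the update thread at most one frame ahead.
            private final BlockingQueue<List<String>> ready = new ArrayBlockingQueue<>(1);

            // Update thread: build the next frame's draw list while GL renders.
            void updateLoop() throws InterruptedException {
                while (true) {
                    List<String> drawList = new ArrayList<>();
                    // ... run game logic and fill drawList (the ~7 ms Update work) ...
                    ready.put(drawList); // blocks until the renderer takes the last list
                }
            }

            // GL thread (e.g. inside onDrawFrame): consume one draw list per frame.
            void renderFrame() throws InterruptedException {
                for (String call : ready.take()) {
                    // ... issue the GLES 2.0 calls for this request ...
                }
            }
        }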


  • How to manage many mobile device users at server side?

    - by Rami
    I built a social Android application in which users can see other users around them by GPS location. At the beginning things went well, as I had a low number of users, but now that I have an increasing number of users (about 1500, plus 100 more every day) a major problem in my design has been revealed. In my Google App Engine servlet I have a static HashMap that holds all the user profile objects, currently 1500, and this number will increase as more users register. Why am I doing it this way? Every user that requests the users around him compares his GPS position with the other users' positions and checks whether they are within a 10 km radius. This happens every five minutes on average. Consequently, I can't fetch the users from the db every time, because the GAE read/write operation quota would tear me apart. What is the problem with this design? As the number of users increases, the HashMap turns to null every 4-6 hours; I think this interval is getting shorter, but I'm not sure. I'm fixing this by reloading the users from the db every time I detect it has become null, but this causes a 30-second denial of service for my users, so I'm looking for a better solution. I'm guessing that it happens because of the size of the HashMap. Am I right? I have been advised to use a spatial database, but that means I can't work with GAE any more, that I need to build my big server all over again, and that I lose my existing DB. Is there something I can do with the existing tools? Thanks.
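
    On the "Am I right?" question: a more likely culprit than the HashMap's size is that App Engine recycles servlet instances (and may run several in parallel), and static fields live only as long as one instance, so the map can vanish or diverge at any time. A hedged sketch of one workaround using App Engine's low-level Memcache API, which is shared across instances (UserProfile is a hypothetical Serializable class):

        import com.google.appengine.api.memcache.Expiration;
        import com.google.appengine.api.memcache.MemcacheService;
        import com.google.appengine.api.memcache.MemcacheServiceFactory;

        class ProfileCache {
            private final MemcacheService cache = MemcacheServiceFactory.getMemcacheService();

            UserProfile get(String userId) {
                UserProfile profile = (UserProfile) cache.get(userId);
                if (profile == null) {
                    profile = loadFromDatastore(userId); // one quota-charged read, only on a miss
                    cache.put(userId, profile, Expiration.byDeltaSeconds(600));
                }
                return profile;
            }

            private UserProfile loadFromDatastore(String userId) {
                // ... single datastore get for this user ...
                return new UserProfile();
            }
        }

        class UserProfile implements java.io.Serializable {
            double latitude, longitude; // GPS position used for the radius check
        }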


  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures with which to implement a simple open-source object temporal database, and currently I'm very fond of using persistent red-black trees to do it. My main reason for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. It will also be easier to implement ACID transactions, and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing a temporal database almost free. And that is something quite nice to have, especially for the web and for data analysis (e.g. trends). All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application under a false premise, only to realize it isn't really a good way to do it. Here's my line of thought:

    - Since all writes are done asynchronously, and using a persistent data structure means not invalidating the previous, still-valid structure, the write time isn't really a bottleneck.
    - There is some literature on structures like this designed exactly for disk usage, but it seems to me that these techniques add more read overhead to achieve faster writes, while I think exactly the opposite trade-off is preferable. Also, many of these techniques really do end up with multi-versioned trees, but they aren't strictly immutable, which is crucial to justify the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and I also know there should be good garbage-collecting logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). A delta compression system could also be considered.
    - Of all the search tree structures, I really think red-black trees are the closest to what I need, since they require the fewest rotations.

    But there are some possible pitfalls along the way:

    - Asynchronous writes could affect applications that need the data in real time, but I don't think that is the case with web applications most of the time. Also, when real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner.
    - They could lead to some commit conflicts, though I fail to think of a good example of when that could happen. Commit conflicts can also occur in a normal RDBMS if two threads are working with the same data, right?
    - The overhead of having an immutable interface like this will grow exponentially, and everything is doomed to fail soon, so this is all a bad idea.

    Any thoughts? Thanks!

    Edit: There seems to be a misunderstanding about what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure
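
    A minimal sketch of the core persistence mechanism (path copying) on a plain immutable binary search tree; a real red-black version would add a color field and rebalancing rotations, but the sharing behavior, which is what the disk-performance question hinges on, is the same:

        final class PersistentBst {
            final int key;
            final PersistentBst left, right;

            PersistentBst(int key, PersistentBst left, PersistentBst right) {
                this.key = key;
                this.left = left;
                this.right = right;
            }

            // Returns a new root; the old root is untouched and still describes
            // the previous version. Only the root-to-leaf path is copied, so an
            // insert allocates O(log n) nodes and shares the rest.
            static PersistentBst insert(PersistentBst node, int key) {
                if (node == null) return new PersistentBst(key, null, null);
                if (key < node.key)
                    return new PersistentBst(node.key, insert(node.left, key), node.right);
                if (key > node.key)
                    return new PersistentBst(node.key, node.left, insert(node.right, key));
                return node; // key already present: share the entire subtree
            }
        }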


  • How do you price your work?

    - by Dr.Kameleon
    Well, let me explain: this has really been an issue for me for such a long time. And what is worse, since coding is something I simply ADORE (I would definitely do it even if there were no payment involved whatsoever), is that I always end up feeling somewhat awkward... Anyway, here's the deal: you start working on a project, you may have something in mind, and even if you're lucky enough that the client needs no "cost estimates" beforehand, sooner or later you'll face the ultimate dilemma of pricing your own work. So, how do YOU do it?

    - By estimating the time you put into it? (Obviously this is not exact, because a more capable coder will need much less time for the very same thing than a not-so-competent one, and even the very same coder may not "perform" equally at all times.)
    - By the lines of code you've written? (Obviously this is not a measure either: a 10-line script that does exactly the same as a 1000-line script is, at least for me, "better".)
    - By taking into account the level of complexity of the project and, perhaps, how specialised the subject is?
    - By taking into account other factors (e.g. the value of the project for your customer)?


  • How to cache queries in EJB and return result efficient (performance POV)

    - by Maxym
    I use the JBoss EJB 3.0 implementation (JBoss 4.2.3 server). At the beginning I created a native query every time, using a construction like:

        Query query = entityManager.createNativeQuery("select * from _table_");

    Of course that is not very efficient; I performed some tests and found out that it really takes a lot of time... Then I found a better way to deal with it: using an annotation to define named native queries:

        @NamedNativeQuery(
            name = "fetchData",
            value = "select * from _table_",
            resultClass = Entity.class
        )

    and then just using it:

        Query query = entityManager.createNamedQuery("fetchData");

    The performance of the code line above is two times better than where I started from, but still not as good as I expected... Then I found that I could switch to the Hibernate annotation for NamedNativeQuery (JBoss's implementation of EJB is based on Hibernate anyway) and add one more thing:

        @NamedNativeQuery(
            name = "fetchData2",
            value = "select * from _table_",
            resultClass = Entity.class,
            readOnly = true
        )

    readOnly marks whether the results are fetched in read-only mode or not. It sounds good, because in this case of mine I don't need to update the data; I just want to fetch it for a report. When I started the server to measure performance I noticed that the query without readOnly=true (it is false by default) returned results better and better with each iteration, while the other one (fetchData2) performed at a "stable" speed; the time difference between them got shorter and shorter, and after 5 iterations the speed of both was almost the same...

    The questions are:

    1) Is there any other way to speed the queries up? Named queries seem to be prepared once, but I can't say for sure... In fact, if I could create the query object once and then just reuse it, that would be better from a performance point of view, but it is problematic to cache this object, because after creating the query I can set parameters (when I use ":variable" in the query), and that changes the query object (doesn't it?). So, is there any way to cache them, or is a named query the best option I have?

    2) Are there other approaches to make retrieving the results faster? For instance, I don't need those entities to be attached; I won't update them; all I need is to fetch a collection of data. Maybe readOnly is the only available way, so I can't speed it up further, but who knows :)

    P.S. I'm not asking about DB performance; all I need now is to avoid creating the query every time, to use it efficiently, and to "allow" the EJB container to do less work for the same returned data.
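
    On question 1, a hedged sketch of one more knob to try (it assumes Hibernate's query cache is enabled via hibernate.cache.use_query_cache=true; otherwise the hint is silently ignored): since this is read-only report data, repeated executions can be served from the query cache instead of hitting the database each time.

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.Query;

        class ReportDao {
            List<?> fetchData(EntityManager entityManager) {
                Query query = entityManager.createNamedQuery("fetchData2");
                // Hibernate-specific JPA hint: cache the result set, keyed by the
                // query string and its parameter values.
                query.setHint("org.hibernate.cacheable", Boolean.TRUE);
                return query.getResultList();
            }
        }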


  • How to approach scrum task burn down when tasks have multiple peoples involvement?

    - by AgileMan
    In my company, a single task can never be completed by one individual: a separate person will QA and code review each task. This means that each individual gives their own estimate, per task, of how much time it will take to complete. The problem is, how should I approach the burn-down? If I aggregate the hours together, assume the following estimate:

        10 hrs - dev time
         4 hrs - QA
         4 hrs - code review
        Task estimate = 18 hrs

    At the end of each day I ask that the task be updated with "how much time is left until it is done". However, each person generally just thinks about their own part of it. Should they mark their own effort remaining and then ADD the other effort estimates to that? How are you guys doing this?

    UPDATE: To help clarify a few things, at my organization each task within a story requires three people:

    - someone to develop the task (write unit tests, etc.),
    - a QA specialist to review the task (they primarily do integration and regression tests),
    - a tech lead to do the code review.

    I don't think there is a wrong way or a right way, but this is our way ... and that won't be changing. We work as a team to complete even the smallest level of a story whenever possible. You cannot actually test whether something works until it is dev complete, and you cannot review the quality of the code before then either ... so the best you can do is split things up into small logical slices so that the bare minimum functionality can be tested and reviewed as early in the process as possible. My question to those who work this way is how to burn down a "task" when they are set up this way. Unless a task has its own sub-tasks (which JIRA doesn't allow) ... I'm not sure of the best way to track "what's left" on a daily basis.


  • Dynamic (C# 4.0) & Var in a nutshell.

    - by mbcrump
    A var is statically typed: the compiler and runtime know the type. It can be used to save some keystrokes. The following are identical:

        var mike = "var demo";
        Console.WriteLine(mike.GetType());  // Returns System.String

        string mike2 = "string Demo";
        Console.WriteLine(mike2.GetType()); // Returns System.String

    A dynamic behaves like an object, but with dynamic dispatch. The compiler doesn't know anything about it at compile time:

        dynamic duo = "dynamic duo";
        Console.WriteLine(duo.GetType()); // System.String
        //duo.BlowUp(); // A dynamic type does not know if this exists until run time.
        Console.ReadLine();

    To further illustrate this point, the dynamic variable called "duo" calls a method that does not exist, called BlowUp(). As you can see from the screenshot below, the compiler reports no errors even though BlowUp() does not exist, and the program will compile fine. It will, however, throw a RuntimeBinderException once it hits that line of code at runtime. Let's try the same thing with a var. This time we get a compiler error saying that BlowUp() does not exist, and the program will not compile until we add a BlowUp() method. I hope this helps with your understanding of the two. If not, drop me a line and I'll be glad to answer.


  • TXPAUSE : polite waiting for hardware transactional memory

    - by Dave
    Classic locks are an appropriate tool to prevent potentially conflicting operations A and B, invoked by different threads, from running at the same time. In a sense the locks cause either A to run before B or vice versa. Similarly, we can replace the locks with hardware transactional memory, or use transactional lock elision to leverage potential disjoint access parallelism between A and B. But often we want A to wait until B has run. In a Pthreads environment we'd usually use locks in conjunction with condition variables to implement our "wait until" constraint. MONITOR-MWAIT is another way to wait for a memory location to change, but it only allows us to track one cache line and it's only available on x86. There's no similar "wait until" construct for hardware transactions. At the instruction-set level a simple way to express "wait until" in transactions would be to add a new TXPAUSE instruction that could be used within an active hardware transaction. TXPAUSE would politely stall the invoking thread, possibly surrendering or yielding compute resources, while at the same time continuing to track the transaction's address-set. Once a transaction has executed TXPAUSE it can only abort. Ideally that would happen when some other thread modifies a variable that's in the transaction's read-set or write-set. And since we're aborting, all writes would be discarded. In a sense this gives us multi-location MWAIT, but with much more flexibility. We could also augment TXPAUSE with a cycle-count bound to cap the time spent stalled. I should note that we can already enter a tight spin loop in a transaction to wait for updates to the address-set to cause an abort. Assuming that the implementation monitors the address-set via cache-coherence probes, by waiting in this fashion we actually communicate via the probes, and not via memory values. That is, the updating thread signals the waiter via probes instead of via traditional memory values. But TXPAUSE would give us a polite way to spin.


  • Adventures in Lab Management Configuration: CMMI Edition Part 1 of 3

    - by Enrique Lima
    I remember at one point someone telling me how close "migrate" is to "migraine". This was a process in which an environment moved from TFS 2008 to TFS 2010, and the process template needed to be migrated too. Here we are talking about CMMI v4.2 to CMMI v5.0. Now, migrating the TFS infrastructure is one thing; migrating the process template is a different deal, not hard ... just involved. I followed a combination of steps that came from a blog post as the main guidance, and then MSDN (as suggested in the guidance post) to complement some tasks and steps. Again, the focus I have here is CMMI. The high-level steps taken to bring the TFS 2008 CMMI v4.2 process template into TFS 2010 are:

    1) Back up the collection, configuration and warehouse databases.
    2) Download the process template using Visual Studio 2010.
    3) Export, modify and import the Bug type definition.
    4) Export, modify and import the Scenario or Requirement type definition.
    5) Create and import the bug field mappings.

    Now we can attempt to connect using Test Manager, and you should be able to get this going. After that was done, it was time to enroll VMs that already existed in the environment. This was a bit more challenging, but in the end it was a matter of analyzing the changes that had been made, to have a temporary workaround from the time we migrated to the time we converted the work items, and adding fields to enable communication between the project and the Test and Lab Manager component. There are two more parts to this post: the second will describe the detailed steps taken to complete the process template update, and the third will talk about the gotchas and fixes for the Lab Management portion.


  • JavaOne Countdown, Are you ready?

    - by Angela Caicedo
    This is a great time of the year! Not only does the weather start cooling down a bit, but it's time to get ready for JavaOne 2012. It feels so long since my last JavaOne (last year I missed it because I was on mom duty), so this year I couldn't be happier to be this close to the action again. Have you ever been to JavaOne? There are a million great reasons to love JavaOne, and the most important for me is the atmosphere of the conference: the Java community is there, and Java is in the air! This year we have more than 450 sessions, and there are HOLs (hands-on labs) to get your hands dirty with code. In addition, there will be very cool demos, an exhibition hall, and a DEMOground. During the whole time you will have the opportunity to interact with the speakers, discuss topics and concerns, and even have a drink! Oh yes, I almost forgot: there will be lots of fun even apart from the technology! For example, there will be a Geek Bike Ride, a Thirsty Bear party, and the Appreciation Party with Pearl Jam and Kings of Leon. How can this get any better? So, are you ready yet? Have you registered? If not, just follow this "Register for JavaOne" link and we'll see you there! P.S. Little-known fact: if you are a student you can get your pass for free!!!


  • repeated entries in website log file

    - by Reza
    I am writing an ad hoc log analyser for my website's log file. The following is the part of the log file which shows that file1.pdf has been downloaded twice. Looking carefully, the time stamp and IP address are exactly the same in both entries. How can there be two downloads at the same time by the same person? Should I count it as 2 in my programme or as 1? Any reply is appreciated.

        name_of_subdomain xxx.xxx.xx.xx - - [02/Apr/2012:09:13:31 +0100] "GET /file1.pdf HTTP/1.1" 206 3706 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; CMDTDF)"
        name_of_subdomain xxx.xxx.xx.xx - - [02/Apr/2012:09:13:31 +0100] "GET /file1.pdf HTTP/1.1" 206 425462 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; CMDTDF)"


  • Extreme Portability: OpenJDK 7 and GlassFish 3.1.1 on Power Mac G5!

    - by MarkH
    Occasionally you hear someone grumble about platform support for some portion or combination of the Java product "stack". As you're about to see, this really is not as much of a problem as you might think. Our friend John Yeary was able to pull off a pretty slick feat with his vintage Power Mac G5. In his words: "Using a build script sent to me by Kurt Miller, build recommendations from Kelly O'Hair, and the great work of the BSD Port team... I created a new build of OpenJDK 7 for my PPC-based system using the Zero VM. The results are fantastic. I can run GlassFish 3.1.1 along with all my enterprise applications." I recently had the opportunity to pick up an old G5 for little money and passed on it. What would I do with it? At the time, I didn't think it would be more than a space-consuming novelty. Turns out... I could have had some fun and a useful piece of hardware at the same time. Maybe it's time to go bargain-hunting again. For more information about repurposing classic Apple hardware, and to learn a few JDK-related tricks in the process, visit John's site for the full article. All the best, Mark


  • Where have I been for the last month?

    - by MarkPearl
    So, I have been pretty quiet for the last month or so. True, it has been holiday time and I went to Cape Town for a stunning week of sunshine and blue skies, but the second I got back home I spent the remainder of my holiday on my PC viewing tutorials on www.tekpub.com. Craig Shoemaker, whom I got in contact with because of his podcast, sent me a one-month free subscription to the site, and it has been really appreciated. I have done a lot of WPF programming in the past, but no ASP.NET work, so I used the time to get a peek at ASP.NET MVC 2 as well as a bunch of other technologies. I just wish I had more spare time to do the rest of the videos. While I didn't understand all of what was shown in the ASP.NET material (it required previous ASP.NET expertise), the site was a really good jump start for someone wanting to learn a new technology and broaden their horizons, and I would highly recommend it. My only gripe is that in South Africa we have limited bandwidth and bandwidth speeds, so I spent a lot of my monthly bandwidth on the site and had to top up with my ISP several times because of the high-quality video captures the site does. I would have preferred to download the videos, but apparently that is only available to people who have the yearly subscription. Other than that, great site, and thanks a ton, Craig!


  • Silverlight, JavaScript and HTML 5 - Who wins?

    - by Sahil Malik
    Disclaimer: These are just opinions. In the past I have expressed opinions about the future of technology and have been ridiculously accurate. I have no idea if this one will be accurate or not, but that is what it's all about: opinions, predicting the future. This topic has been boiling inside me for a while, and I have discussed it in private get-togethers with like-minded techies, but I thought it would be a good idea to put it together as a blog post. There is some debate about the future of Silverlight, especially in light of technologies such as newer, faster browsers and HTML5. As a .NET developer, where do I invest my time and skills? Remember, you have limited time and skills, and not everything that comes out of Microsoft is a smashing success. So it is very, very wise for you to consider the facts and macro trends, and allocate what you have limited amounts of: "time".


  • PHP echo query result in Class??

    - by Jerry
    Hi all, I have a question about PHP classes. I am trying to fetch a result set from MySQL via PHP, and I would like to know whether the best practice is to display the result inside the class or to return the result and handle it outside. For example, displaying the result inside the class:

        class Schedule {
            public $currentWeek;

            function teamQuery($currentWeek) {
                $this->currentWeek = $currentWeek;
            }

            function getSchedule() {
                $connection = mysql_connect(DB_SERVER, DB_USER, DB_PASS);
                if (!$connection) {
                    die("Database connection failed: " . mysql_error());
                }
                $db_select = mysql_select_db(DB_NAME, $connection);
                if (!$db_select) {
                    die("Database selection failed: " . mysql_error());
                }
                $scheduleQuery = mysql_query("SELECT guest, home, time, winner, pickEnable
                    FROM $this->currentWeek ORDER BY time", $connection);
                if (!$scheduleQuery) {
                    die("database has errors: " . mysql_error());
                }
                // MYSQL_ASSOC (not the non-existent MYSQL_NUMS) so that string
                // keys like $row['winner'] work
                while ($row = mysql_fetch_array($scheduleQuery, MYSQL_ASSOC)) {
                    // display the result, e.g.: echo $row['winner'];
                }
                mysql_close($connection); // close the connection, not the result
                // no return value
            }
        }

    Or returning the query result as a variable and handling it in PHP:

        class Schedule {
            public $currentWeek;

            function teamQuery($currentWeek) {
                $this->currentWeek = $currentWeek;
            }

            function getSchedule() {
                $connection = mysql_connect(DB_SERVER, DB_USER, DB_PASS);
                if (!$connection) {
                    die("Database connection failed: " . mysql_error());
                }
                $db_select = mysql_select_db(DB_NAME, $connection);
                if (!$db_select) {
                    die("Database selection failed: " . mysql_error());
                }
                $scheduleQuery = mysql_query("SELECT guest, home, time, winner, pickEnable
                    FROM $this->currentWeek ORDER BY time", $connection);
                if (!$scheduleQuery) {
                    die("database has errors: " . mysql_error());
                }
                // collect the rows into an array
                $ret = array();
                while ($row = mysql_fetch_array($scheduleQuery, MYSQL_ASSOC)) {
                    $ret[] = $row;
                }
                mysql_close($connection);
                return $ret; // handle the return value in the calling code
            }
        }

    Two things here: first, I found that the returned value is a little complex to work with, since it is a two-dimensional array. I am not sure what the best practice is and would like to ask your expert opinions. Second, every time I create a new method I have to recreate the $connection variable and repeat the whole connect/select block above, which seems redundant to me. Can I do it only once instead of every time I need a query? I am new to PHP classes; hope you guys can help me. Thanks.


  • How to sync client and server at the first frame

    - by wheelinlight
    I'm making a game where an authoritative server sends information to all clients about states and positions of objects in a 3D world. The player can control his character by clicking on the screen to set a destination for the character, much like in the Diablo series. I've read most of the information I can find online about interpolation, reconciliation, and general networking architecture (Valve's, for instance). I think I understand everything but one thing, which seems to be missing in every article I read. Let's say we have an interpolation delay of 100ms, a server tick rate of 50ms, and a latency of 200ms. How do I know when 100ms have passed on the client? If the server sends the first update at t=0, can I assume it arrives at t=200, thereby assuming that all packets take the same amount of time to reach the client? What if the first packet arrives a little early, for instance at t=150? I would then be starting the client at t=150, and at t=250 it will think 100ms have passed since it connected to the server when in fact only 50ms have passed. Hopefully the above paragraph is understandable. The summarized question would be: how do I know at what tick to start simulating on the client?

    EDIT: This is how I ended up doing it. The client keeps a clock (approximately) in sync with the server. The client then simulates the world at

        simulationTime = syncedTime - avg(RTT)/2 - interpolationTime

    The round-trip time can fluctuate, so I average it out over time. By keeping only the most recent values when calculating the average, I hope to adapt to more permanent changes in latency. It's still too early to draw any conclusions. I'm currently simulating bad network connections, but it's looking good so far. Does anyone see any possible problems?
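
    A minimal sketch of the scheme described in the edit (all names are illustrative, times in milliseconds): keep a bounded window of recent RTT samples, estimate the server's clock from each timestamped packet, and simulate at syncedTime - avg(RTT)/2 - interpolationTime.

        import java.util.ArrayDeque;
        import java.util.Deque;

        class ClientClock {
            private static final int MAX_SAMPLES = 20;              // recent-only RTT window
            private static final long INTERPOLATION_DELAY_MS = 100; // interpolation buffer

            private final Deque<Long> rttSamples = new ArrayDeque<>();
            private long clockOffsetMs; // estimated serverTime - localTime

            // Called when a timestamped server packet answers one of our pings.
            void onServerPacket(long pingSentAtMs, long serverTimeMs, long nowMs) {
                long rtt = nowMs - pingSentAtMs;
                if (rttSamples.size() == MAX_SAMPLES) rttSamples.removeFirst();
                rttSamples.addLast(rtt);
                // The stamp is about rtt/2 old when it arrives, so add that back
                // to estimate the server's current clock.
                clockOffsetMs = serverTimeMs + rtt / 2 - nowMs;
            }

            private long avgRtt() {
                long sum = 0;
                for (long r : rttSamples) sum += r;
                return rttSamples.isEmpty() ? 0 : sum / rttSamples.size();
            }

            // The tick to simulate now: the freshest server state we can hold is
            // ~avgRtt/2 old, and we stay a further interpolation delay behind it.
            long simulationTimeMs(long nowMs) {
                return nowMs + clockOffsetMs - avgRtt() / 2 - INTERPOLATION_DELAY_MS;
            }
        }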


  • Grub2 attempting to boot hd1 when it should boot hd0

    - by JoBu1324
    I'm attempting to perform a "normal" install on a USB3 SSD (I don't know if it is noteworthy, but I don't have a swap partition). The installation proceeds normally. (I'm installing from a USB2 device I created using LiLi Boot, with a copy of Ubuntu 12.10 64-bit that I downloaded directly from the source.) The system I'm running Ubuntu on has had a more traditional installation of Ubuntu (also 12.10) running on it without issue, so I know that everything works A-OK when booting from a 7200 RPM internal disk. There are a number of oddities that I've noticed so far, including graphics corruption, but the first and most pressing issue is that GRUB 2 refuses to recognize the correct hd. From /boot/grub/grub.cfg:

        if [ x$feature_default_font_path = xy ] ; then
           font=unicode
        else
           insmod part_msdos
           insmod ext2
           set root='hd1,msdos1'
           if [ x$feature_platform_search_hint = xy ]; then
             search --no-floppy --fs-uuid --set=root --hint-bios=hd1,msdos1 --hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1 b58ee4f7-d41d-400a-b7b8-18bd1f0ae9d3
           else
             search --no-floppy --fs-uuid --set=root b58ee4f7-d41d-400a-b7b8-18bd1f0ae9d3
           fi
           font="/usr/share/grub/unicode.pf2"
        fi

    This is from a 100% fresh install of Linux (first boot), which was installed while no hard drives were connected to the system other than the USB2 LiLi drive. The system refuses to boot unless I change hd1,msdos1 to hd0,msdos1 in the GRUB menu at boot, even though it is the only disk device connected to the PC. What options are left for me to troubleshoot this issue? I've been racking my brain and taxing the internet trying to dig up something on this problem, but now I'd like to see if the Ubuntu community can rise to the challenge and help me fix it. This is the second time I've attempted this particular setup. The first time, after days of wasted time, I managed to get it to boot every other boot: every even boot it booted into Ubuntu like it was happy; every odd boot it booted into the BusyBox or GRUB prompt. At one point it complained that it couldn't find /dev/disk/by-uuid/[the disk], which I found most perplexing, since the disk was there and it booted before and after the occurrence (with intervention).


  • Should I avoid or embrace asking questions of other developers on the job?

    - by T.K.
    As a CS undergraduate, the people around me are either learning or are paid to teach me, but as a software developer, the people around me have tasks of their own. They aren't paid to teach me, and conversely, I am paid to contribute. When I first started working as a software developer co-op, I was introduced to a huge code base written in a language I had never used before. I had plenty of questions, but didn't want to bother my co-workers with all of them; it wasted their time and hurt my pride. Instead, I spent a lot of time bouncing between IDE and browser, trying to make sense of what had already been written and to differentiate between expected behavior and symptoms of bugs. I'd ask my co-workers when I felt that the root of my lack of understanding was an in-house concept that I wouldn't find on the internet, but aside from that, I tried to confine my questions to lunch hours. Naturally, there were occasions where I wasted time trying to understand something on the internet that had, at its heart, an in-house concept, but overall, I felt I was productive enough during my first semester, contributing about as much as one could expect and gaining a pretty decent understanding of large parts of the product. I was wondering what senior developers feel about that mindset. Should new developers ask more questions to get up to speed faster, or should they do their own research for themselves? I see benefits to both mindsets, and anticipate a large variety of responses, but I figure new developers might appreciate your answers without thinking to ask this question.

