Search Results

Search found 61036 results on 2442 pages for 'time keeping'.


  • SQLAuthority News – Great Time Spent at Great Indian Developers Summit 2014

    - by Pinal Dave
    The Great Indian Developer Summit (GIDS) is one of the most popular annual developer events held in Bangalore. This year GIDS was scheduled for April 22-25. I presented a total of four sessions at this event, each very different from the others. Here are the details of the four sessions I presented there.

    [Image: Pluralsight shades]

    This was a great event and I had fantastic fun presenting technology here. I was indeed very excited that, along with me, many of my friends were presenting at the event as well. I want to thank all of you for attending my sessions and for filling the room to standing-room-only every single time. I have already sent resources out in my newsletter; you can sign up for the newsletter over here.

    [Image: Indexing is an Art]

    I was amazed by the crowds present in the sessions at GIDS. There was great interest in the subject of SQL Server and performance tuning.

    [Image: Audience at GIDS]

    I believe events like this provide a great platform to meet and share knowledge.

    [Image: Pinal at Pluralsight Booth]

    Here are the abstracts of the sessions I presented. They were recorded, so at some point they will be available, but if you want the content of all the courses immediately, I suggest you check out my video courses on the same subjects on Pluralsight.

    Indexes, the Unsung Hero (relevant Pluralsight course): Slow-running queries are the most common problem developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and on indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Indexes are the most crucial objects in a database; they are the first stop for any DBA or developer when it comes to performance tuning. There is a good side as well as an evil side to indexes, and to master the art of performance tuning one has to understand the fundamentals of indexes and the best practices associated with them. We will cover various aspects of indexing, such as duplicate indexes, redundant indexes, and missing indexes, as well as best practices around indexes.

    SQL Server Performance Troubleshooting: Ancient Problems and Modern Solutions (relevant Pluralsight course): Many believe performance tuning and troubleshooting is an art that has been lost in time. The truth is that the art has evolved with time, and there are more tools and techniques to overcome ancient troublesome scenarios. There are three major resources that create performance problems when bottlenecked: CPU, IO, and memory. In this session we will focus on detecting high-CPU scenarios and their resolutions; if time permits, we will cover other performance-related tips and tricks. At the end of this session, attendees will have a clear idea, as well as action items, regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning one has to understand the fundamentals of performance tuning and the best practices associated with it. We will discuss performance tuning in this session with the help of demos.
    [Image: Pinal Dave at GIDS]

    MySQL Performance Tuning - Unexplored Territory (relevant Pluralsight course): Performance is one of the most essential aspects of any application. Everyone wants their server to perform optimally and at the best efficiency. However, not many people talk about MySQL and performance tuning, as it is an extremely unexplored territory. In this session, we will talk about how we can tune MySQL performance, and we will also try to cover other performance-related tips and tricks. At the end of this session, attendees will not only have a clear idea, but will also carry home action items regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning one has to understand the fundamentals of performance tuning and the best practices associated with it. You will also witness some impressive performance tuning demos in this session.

    Hidden Secrets and Gems of SQL Server We Bet You Never Knew (relevant Pluralsight course): A SQL trio session! It really amazes us every time someone says SQL Server is an easy tool to handle and work with. Microsoft has done an amazing job of making working with a complex relational database a breeze for developers and administrators alike. Though it looks like child's play to some, the reality is far from this notion. While the basics and fundamentals are simple and uniform across databases, understanding the nuts and bolts of SQL Server is something we need to master over a period of time. With a collective experience of more than 30 years with databases among the speakers, we will take a unique tour of various aspects of SQL Server and bring you life lessons learned from working with it. We will share trade secrets of performance, configuration, new features, tuning, behaviors, T-SQL practices, common pitfalls, productivity tips on tools, and more. This is a highly demo-filled session with practical takeaways if you are a SQL Server developer or an administrator. The speakers will be able to stump you and give you answers on almost everything inside the relational database called SQL Server. I personally attended the sessions of Vinod Kumar, Balmukund Lakhani, Abhishek Kumar, and my favorite, Govind Kanshi.

    Summary: If you missed this event, here are two action items: 1) sign up for the resource newsletter, and 2) watch my video courses on Pluralsight.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • How to Achieve Real-Time Data Protection and Availability... For Real

    - by JoeMeeks
    There is a class of business- and mission-critical applications where downtime or data loss has a substantial negative impact on revenue, customer service, reputation, cost, etc. Because the Oracle Database is used extensively to provide reliable performance and availability for this class of application, it also provides an integrated set of capabilities for real-time data protection and availability.

    Active Data Guard, depicted in the figure below, is the cornerstone for accomplishing these objectives because it provides the absolute best real-time data protection and availability for the Oracle Database. This is a bold statement, but it is supported by the facts. It isn't so much that alternative solutions are bad; it's that their architectures prevent them from achieving the same levels of data protection, availability, simplicity, and asset utilization provided by Active Data Guard. Let's explore further.

    Backups are the most popular method used to protect data and are an essential best practice for every database. Not surprisingly, Oracle Recovery Manager (RMAN) is one of the most commonly used features of the Oracle Database. But comparing Active Data Guard to backups is like comparing apples to motorcycles. Active Data Guard uses a hot (open read-only), synchronized copy of the production database to provide real-time data protection and HA. In contrast, a restore from backup takes time and often has many moving parts - people, processes, software, and systems - that can create a level of uncertainty during an outage that critical applications can't afford. This is why backups play a secondary role for your most critical databases, complementing real-time solutions that can provide both data protection and availability.

    Before Data Guard, enterprises used storage remote-mirroring for real-time data protection and availability. Remote-mirroring is a sophisticated storage technology promoted as a generic infrastructure solution that makes a simple promise: whatever is written to a primary volume will also be written to the mirrored volume at a remote site. Keeping this promise is also what causes data loss and downtime when the data written to primary volumes is corrupt - the same corruption is faithfully mirrored to the remote volume, making both copies unusable. This happens because remote-mirroring is a generic process. It has no intrinsic knowledge of Oracle data structures to enable advanced protection, nor can it perform independent Oracle validation BEFORE changes are applied to the remote copy. There is also nothing to prevent human error (e.g. a storage admin accidentally deleting critical files) from also impacting the remote mirrored copy.

    Remote-mirroring tricks users by creating a false impression that there are two separate copies of the Oracle Database. In truth, while remote-mirroring maintains two copies of the data on different volumes, both are part of a single, closely coupled system. Not only will remote-mirroring propagate corruptions and administrative errors, but the changes applied to the mirrored volume are a result of the same Oracle code path that applied the change to the source volume. There is no isolation, either from a storage-mirroring perspective or from an Oracle software perspective. Bottom line: storage remote-mirroring lacks both the smarts and the isolation necessary to provide true data protection.
    Active Data Guard offers much more than storage remote-mirroring when your objective is protecting your enterprise from downtime and data loss. Like remote-mirroring, an Active Data Guard replica is an exact block-for-block copy of the primary. Unlike remote-mirroring, an Active Data Guard replica is NOT a tightly coupled copy of the source volumes - it is a completely independent Oracle Database. Active Data Guard's inherent knowledge of Oracle data block and redo structures enables a separate Oracle Database, using a different Oracle code path than the primary, to apply the full complement of Oracle data validation methods before changes are applied to the synchronized copy. These include physical checksums, logical intra-block checking, lost-write validation, and automatic block repair. The figure below illustrates the stark difference between what remote-mirroring can discern from an Oracle data block and what Active Data Guard can discern.

    An Active Data Guard standby also provides a range of additional services enabled by the fact that it is a running Oracle Database - not just a mirrored copy of data files. An Active Data Guard standby database can be open read-only while it is synchronizing with the primary. This enables read-only workloads to be offloaded from the primary system and run on the active standby - boosting performance by utilizing all assets. An Active Data Guard standby can also be used to implement many types of system and database maintenance in rolling fashion. Maintenance and upgrades are first implemented on the standby while production runs unaffected at the primary. After the primary and standby are synchronized and all changes have been validated, the production workload is quickly switched to the standby. The only downtime is the time required for user connections to transfer from one system to the next. These capabilities further expand the availability offered by a data protection solution beyond what is possible using storage remote-mirroring.

    So don't be fooled by appearances. Storage remote-mirroring and Active Data Guard replication may look similar on the surface, but the devil is in the details. Only Active Data Guard has the smarts, the isolation, and the simplicity to provide the best data protection and availability for the Oracle Database. Stay tuned for future blog posts that dive into the many differences between storage remote-mirroring and Active Data Guard along the dimensions of data protection, data availability, cost, asset utilization, and return on investment.

    For additional information on Active Data Guard, see:
    - Active Data Guard Technical White Paper
    - Active Data Guard vs Storage Remote-Mirroring
    - Active Data Guard Home Page on the Oracle Technology Network

    Read the article

  • Temporary storage for keeping data between program iterations?

    - by mr.b
    I am working on an application that works like this:

    1. It fetches data from many sources, resulting in a pool of about 500,000-1,500,000 records (depending on the time/day).
    2. The data is parsed.
    3. Part of the data is processed by comparing it to pre-existing data (read from the database); calculations are made, and the results are stored in the database. The resulting dataset that has to be stored is much smaller than the original (5,000-50,000 records). This step almost always updates existing data, and perhaps adds a few more records.
    4. The data from step 2 should then be kept somehow, somewhere, so that the next time data is fetched there is a dataset that can be used to perform the calculations without touching the pre-existing data in the database.

    I should point out that this data can be lost; it's not irreplaceable (the key information can be re-read from the database if needed), but keeping it would speed up the process next time. Application components can (and will) be run on different computers in the same network, so the storage has to be reachable from multiple hosts.

    I have considered using memcached, but I'm not quite sure I should, because one record is usually no smaller than 200 bytes, and with 1,500,000 records I guess it would amount to over 300 MB of memcached cache. That doesn't seem scalable to me - what if the data were 5x that amount, consuming 1-2 GB of cache just to keep data between iterations (which could easily happen)?

    So, the question is: which temporary storage mechanism would be most suitable for this kind of processing? I haven't considered MySQL temporary tables, as I'm not sure they can persist between sessions and be used by other hosts on the network. Any other suggestions? Anything I should consider?
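
    A minimal sketch of the memcached route using the spymemcached Java client; the host name, key scheme, and record shape are illustrative, and per-record keys are used because memcached caps individual items at about 1 MB:

        import java.io.Serializable;
        import java.net.InetSocketAddress;
        import net.spy.memcached.MemcachedClient;

        public class IterationCache {
            // Illustrative record type; must be Serializable for the default transcoder.
            record Rec(long id, String payload) implements Serializable {}

            public static void main(String[] args) throws Exception {
                // One memcached node; any host on the network can reach it.
                MemcachedClient cache = new MemcachedClient(new InetSocketAddress("cache-host", 11211));

                // Store each record under its own key, expiring after 24 hours;
                // the data is replaceable, so an eviction only costs a database re-read.
                Rec r = new Rec(42L, "parsed data");
                cache.set("rec:" + r.id(), 24 * 3600, r);

                // Next iteration, possibly on a different host:
                Rec previous = (Rec) cache.get("rec:42");
                if (previous == null) {
                    // Cache miss or eviction: fall back to key information in the database.
                }
                cache.shutdown();
            }
        }

    Memcached evicts under memory pressure rather than growing without bound, which matches data that is allowed to be lost; if the working set really grows to gigabytes, a disk-backed key-value store fits better than a pure in-memory cache.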

    Read the article

  • Right-Time Retail Part 2

    - by David Dorf
    This is part two of the three-part series.

    Right-Time Integration

    Of course these real-time enabling technologies are only as good as the systems that utilize them, and it only takes one bottleneck to slow everyone else down. What good is an immediate stock-out notification if the supply chain can't react until tomorrow? Since being formed in 2006, Oracle Retail has been not only adding more integrations between systems, but also modernizing integrations for appropriate speed. Notice I tossed in the word "appropriate." Not everything needs to be real-time - again, we're talking about Right-Time Retail. The speed of data capture, analysis, and execution must be synchronized or you're wasting effort. Unfortunately, there isn't an enterprise-wide dial that you can crank up for your estate. You'll need to improve things piecemeal, with people and processes as limiting factors, while choosing the appropriate types of integrations. There are three integration styles we see in the retail industry.

    First is batch. I know, the word "batch" just sounds slow, but this pattern is less about velocity and more about volume. When there are large amounts of data to be moved, you'll want to use batch processes. Our technology of choice here is Oracle Data Integrator (ODI), which provides a fast version of Extract-Transform-Load (ETL). Instead of the three-step process, the load and transform steps are combined to save time. ODI is a key technology for moving data into Retail Analytics, where we can apply science. Performing analytics on each sale as it occurs doesn't make any sense, so we batch up a statistically significant amount and submit it all at once.

    The second style is fire-and-forget. For some types of data, we want the data to arrive ASAP but immediacy is not necessary. Speed is less important than guaranteed delivery, so we use the message-oriented middleware available in both WebLogic and the Oracle database. For example, Point-of-Service transactions are queued for delivery to Central Office at corporate. If the network is offline, those transactions remain in the queue and will be delivered when the network returns. Transactions cannot be lost and they must be delivered in order. (Ever tried processing a return before the sale?) To enhance the standard queues, we offer the Retail Integration Bus (RIB) to help with the management and monitoring of fire-and-forget messaging in the enterprise; a sketch of this style follows after this entry.

    The third style is request-response and is most commonly implemented as Web services. This is a synchronous message where the sender waits for a response. In this situation, the volume of data is small, guaranteed delivery is not necessary, but speed is very important.
    Examples include the website checking inventory, a price lookup, or processing a credit card authorization. The Oracle Service Bus (OSB) typically handles the routing of such messages, and we've enhanced its abilities with the Retail Service Backbone (RSB).

    To better understand these integration patterns and where they apply within the retail enterprise, we're providing the Retail Reference Library (RRL) at no charge to Oracle Retail customers. The library is composed of a large number of industry business processes, including those necessary to support Commerce Anywhere, as well as detailed architectural diagrams. These diagrams allow implementers to understand the systems involved in integrations and the specific data payloads. Furthermore, with our upcoming release we'll be providing a new tool called the Retail Integration Console (RIC) that allows IT to monitor and manage integrations from a single point. Using RIC, retailers can quickly discern where integration activity is occurring, volume statistics, average response times, and errors. The dashboards provide the ability to dive down into the architecture documentation to gather information all the way down to the specific payload. Retailers that want real-time integrations will also need real-time monitoring of those integrations to ensure service-level agreements are maintained.

    Part 3 looks at marketing.
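
    To make the fire-and-forget style described above concrete, here is a minimal sketch in plain JMS; the JNDI names and the XML payload are made up, and in an Oracle Retail estate the RIB would sit on top of queue plumbing along these lines:

        import javax.jms.*;
        import javax.naming.InitialContext;

        public class PosTransactionSender {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                // Assumed JNDI names; the real ones depend on the WebLogic/AQ configuration.
                ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/RetailCF");
                Queue queue = (Queue) ctx.lookup("jms/PosTransactions");

                Connection conn = factory.createConnection();
                try {
                    Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);
                    // PERSISTENT delivery: the broker stores the message, so it survives
                    // restarts and is delivered, in order, once the network is back.
                    producer.setDeliveryMode(DeliveryMode.PERSISTENT);
                    TextMessage msg = session.createTextMessage("<PosTxn id=\"42\" total=\"19.99\"/>");
                    producer.send(msg); // returns as soon as the broker accepts the message
                } finally {
                    conn.close();
                }
            }
        }

    The sender carries on immediately after send(); guaranteed, ordered delivery is the broker's job, which is exactly the trade-off this style makes.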

    Read the article

  • How to force VS to react to a change of an attached property at design time?

    - by sedovav
    Imagine we have a WPF class library with a window (Window1.xaml) and a resource dictionary (res.xaml) defined in it. I know how to use styles defined in res.xaml for the controls defined inside the window:

        <Window x:Class="...Window1">
          <Window.Resources>
            <ResourceDictionary>
              <ResourceDictionary.MergedDictionaries>
                <ResourceDictionary Source="res.xaml"/>
              </ResourceDictionary.MergedDictionaries>
            </ResourceDictionary>
          </Window.Resources>
        </Window>

    So we can use the dictionary's styles for all elements inside the window (except the window element itself... I don't know how to set a style from res.xaml on the window :( ). I saw an article that describes how to create and use an attached property to add resource dictionaries to a FrameworkElement.Resources.MergedDictionaries list. It's good! We can do the same as in the example above, but now we can use the window style too. It looks like this:

        <Window x:Class="...Window1"
                xmlns:resources="..."
                resources:SharedResources.MergedDictionaries="res.xaml">
        </Window>

    That's good, but VS2008 cannot recognize the resources from res.xaml at design time. So we have a sad situation: all styles from res.xaml are available at run time, but at design time VS cannot display the window (it can't find the mentioned styles). Does anybody know how to fix this situation?

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and so little time. As time is short, we need to be selective about what we learn. I believe I know quite a lot of things in SQL, but I still do not know what is around SQL. I have recently started to learn about NewSQL. If you wonder what NewSQL is, I encourage all of you to read my blog post about it over here: Big Data - Buzz Words: What is NewSQL - Day 10 of 21. NewSQL databases are quickly becoming popular, providing the scale of NoSQL with SQL features and transactions.

    As part of learning about NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database. The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was only available as an appliance, but with their software release on Oct 31, everyone can use it. It is now available as forever-free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free offering, I am indeed interested in ClustrixDB now. I know that a few of the leading e-commerce sites in the world use it for their transactional database.

    Here are a few of the details I quickly noted. ClustrixDB allows users to:

    - Scale by simply adding nodes to the cluster with a single command
    - Run billions of transactions a day
    - Run fast real-time analytics
    - Achieve high availability with recovery from node failure
    - Let the cluster manage itself
    - Easily migrate from MySQL, as it is nearly plug-and-play compatible - use MySQL drivers, tools, and replication

    While going through the documentation, I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts, and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution.

    ClustrixDB sounds like a very promising solution, so I decided to dig a bit deeper to understand who its current customers are, as the company has existed in the industry for quite a few years. Their client list is indeed very interesting; here is my quick research about them:

    - Twoo.com - Europe's largest social discovery (dating) site runs 4.4 billion transactions a day, with table sizes over a terabyte, on a 168-core cluster.
    - EngageBDR - Top 3 in the online advertising category; uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
    - NoMoreRack - Top 2 fastest-growing e-commerce company in the US; used ClustrixDB for high availability and fast growth through the Amazon cloud.
    - MakeMyTrip - India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.

    Many enterprises such as AOL, CSC, Rakuten, and Symantec use ClustrixDB when their applications need scale. I must admit that I am impressed with what I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when the conversation is about NewSQL, I know what I am talking about.
    Read more about why Clustrix believes ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that in the future when we discuss its technical aspects, we are all on the same page. The software can be downloaded here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
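
    Since the post says ClustrixDB is nearly plug-and-play compatible with MySQL drivers, connecting from Java should look like any ordinary MySQL JDBC connection. A small sketch under that assumption (host, credentials, and database are made up; the MySQL Connector/J driver is assumed to be on the classpath):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ClustrixPing {
            public static void main(String[] args) throws Exception {
                // Standard MySQL JDBC driver pointed at a hypothetical ClustrixDB node.
                String url = "jdbc:mysql://clustrix-node1:3306/testdb";
                try (Connection conn = DriverManager.getConnection(url, "app", "secret");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT VERSION()")) {
                    while (rs.next()) {
                        System.out.println("Connected to: " + rs.getString(1));
                    }
                }
            }
        }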

    Read the article

  • Coming up with manageable game ideas as a hobbyist game developer

    - by Kragen
    I'm trying to come up with ideas for games to develop. As per the advice on this question, I've started jotting down and brainstorming my ideas as I get them, and it has worked relatively well - I now have a growing collection of ideas that I think are relatively original. The trouble is that I'm a solo hobbyist developer, so my time is limited (and I have a short attention span!). I've decided to set myself a limit of one working week (i.e. 35-40 hours) to develop/prototype a game, but all of the ideas that really spark my imagination are far too complex to be achievable in that sort of time (e.g. RTS or RPG style gameplay), and none of my simpler ideas really strike me as being that good (and whenever I get a flash of inspiration I invariably end up making things more complicated!). Am I being too picky - should I just take one of my simpler ideas and have a go?

    Read the article

  • Date/Time Format in Ruby

    - by Madhan ayyasamy
    The following snippet is very useful when rendering dates in various formats in Ruby on Rails.

    Format meanings:
      %a - The abbreviated weekday name ("Sun")
      %A - The full weekday name ("Sunday")
      %b - The abbreviated month name ("Jan")
      %B - The full month name ("January")
      %c - The preferred local date and time representation
      %d - Day of the month (01..31)
      %H - Hour of the day, 24-hour clock (00..23)
      %I - Hour of the day, 12-hour clock (01..12)
      %j - Day of the year (001..366)
      %m - Month of the year (01..12)
      %M - Minute of the hour (00..59)
      %p - Meridian indicator ("AM" or "PM")
      %S - Second of the minute (00..60)
      %U - Week number of the current year, starting with the first Sunday as the first day of the first week (00..53)
      %W - Week number of the current year, starting with the first Monday as the first day of the first week (00..53)
      %w - Day of the week (Sunday is 0, 0..6)
      %x - Preferred representation for the date alone, no time
      %X - Preferred representation for the time alone, no date
      %y - Year without a century (00..99)
      %Y - Year with century
      %Z - Time zone name
      %% - Literal "%" character

      t = Time.now
      t.strftime("Printed on %m/%d/%Y")   #=> "Printed on 04/09/2003"
      t.strftime("at %I:%M%p")            #=> "at 08:56AM"

    Have a great day!

    Read the article

  • The Case of the Missing Date/Time Stamp: Reporting Services 2008 R2 Snapshots

    - by smisner
    This week I stumbled upon an undocumented "feature" in SQL Server 2008 R2 Reporting Services as I was preparing a demonstration on how to set up and use report snapshots. If you're familiar with the main changes in this latest release of Reporting Services, you probably already know that Report Manager got a facelift this time around. Although this facelift was generally a good thing, one of the casualties - in my opinion - is the loss of the snapshot label that served two purposes. First, it flagged the report as a snapshot. Second, it let you know when that snapshot was created.

    As part of my standard operating procedure when demonstrating report snapshots, I point out this label, so I was rather taken aback when I didn't see it in the demonstration I was preparing. It sort of upset my routine, and I'm rather partial to my routines. I thought perhaps I wasn't looking in the right place and changed Report Manager from Tile View to Details View, but no - that label was still missing. In the grand scheme of life, it's not an earth-shattering change, but you'll have to look at the Modified Date in Details View to know when the snapshot was run. Or hope that the report developer included a textbox to show the execution time in the report. (Hint: this is a good time to add this to your list of report development best practices, whether a report gets set up as a report snapshot or not!)

    A snapshot from the past: In case you don't remember how a snapshot appeared in Report Manager back in the old days (of SQL Server 2008 and earlier), here's an image I snagged from my Reporting Services 2008 Step by Step manuscript.

    A snapshot in the present: A report server running in SharePoint integrated mode had no such label. There you had to rely on the Report Modified date-time stamp to know the snapshot execution time. So I guess all platforms are now consistent. Here's a screenshot of Report Manager in the 2008 R2 version. One of these is a snapshot and the rest execute on demand. Can you tell which is the snapshot?

    Consider descriptions as an alternative: So my report snapshot demonstration has one less step, and I'll need to edit the Denali version of the Step by Step book. Things are simpler this way, but I sure wish we had an easier way to identify the execution methods of the reports. Consider using the description field to alert users that the report is a snapshot. It might save you a few questions about why the data isn't up-to-date if the users know that something changed in the source of the report. Notice that the full description doesn't display in Tile View, so keep it short and sweet, or instruct users to open Details View to see the entire description.

    Read the article

  • Battery icon constantly empty when discharging Vaio VGN-FZ210CE

    - by Alex
    I have a Sony Vaio VGN-FZ210CE laptop, and its battery does not report the drain rate properly. In 11.10 and earlier, the battery icon would update with the percentage, not the estimated time. In 12.04 and 12.10, it estimates by time, which is always some very low value because the estimated drain rate is 700W. I currently have 89% in my battery, but the icon is red and empty. If there is an application I should install, or a setting I should change, please let me know. Thanks!

    Read the article

  • Agile Tools For Handling Multiple Projects

    - by f1dave
    Currently I'm leading our agile team in an iteration manager role, as well as doing my regular dev work. One of the difficulties I'm facing as an IM is tracking burn-down/burn-up - not because I can't produce graphs, but because there are multiple projects that this team is working on at one time. At present I have an Excel workbook with sheets that contain a whole bunch of graphs, both at an overall team level and a by-project level. It's clunky, and I spend more time tweaking formulas and double-checking calculations than I'd really like. As such, I'm interested to know if anyone has used a tool that can effectively produce these sorts of reports, burn-downs, and predictions across multiple projects. I've seen http://www.pivotaltracker.com/ do some nice things, and of course there's JIRA/Greenhopper, but I'm not aware of those being used to track the progress of multiple projects within one team. If anyone's got an idea of some tools, or has faced a similar problem before, I'd love to hear from you.

    Read the article

  • Variable-step update() in game loop is falling behind, how can I get around this?

    - by ThatsGobbles
    I'm working on a minimal game engine for my next game. I'm using the delta update method, as shown:

        void update(double delta) {
            // Update code that uses `delta` goes here
        }

    I have a deep hierarchy of updatable objects, with a root updatable that contains several updatables, each of which contains more updatables, and so on. Normally I'd just iterate through each of the root's children and update each one, which would then do the same for its children, and so on. However, passing a fixed value of delta to the root means that by the time the leaf updatables are reached, more than delta seconds have actually elapsed. This is causing noticeable desyncing in my game, and time synchronization is very important in my case (I'm working on a rhythm game). Any ideas on how I should tackle this? I've considered using stopwatches and a global readable timer, but any advice would be helpful. I'm also open to moving to fixed timesteps as opposed to variable.
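
    One way to sidestep the drift is a fixed timestep: read the wall clock once per frame, then hand the same constant delta down the whole hierarchy, however deep it is. A minimal sketch of the classic accumulator loop (root.update and render are placeholders for this engine's real entry points):

        public class GameLoop {
            static final double STEP = 1.0 / 120.0; // fixed logic step, in seconds

            interface Updatable { void update(double delta); }

            public static void run(Updatable root) {
                long previous = System.nanoTime();
                double accumulator = 0.0;
                while (true) {
                    long now = System.nanoTime();
                    accumulator += (now - previous) / 1_000_000_000.0;
                    previous = now;
                    // Consume elapsed time in fixed slices; every node in the
                    // hierarchy sees the same delta regardless of traversal time.
                    while (accumulator >= STEP) {
                        root.update(STEP);
                        accumulator -= STEP;
                    }
                    render(accumulator / STEP); // interpolation factor for smooth drawing
                }
            }

            static void render(double alpha) { /* draw here */ }
        }

    For a rhythm game, the same idea applies to audio sync: schedule gameplay events against a single authoritative clock (ideally the audio clock) sampled once per frame, rather than re-reading the time inside each updatable.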

    Read the article

  • How many hours do you spend programming outside of work if you are married (with children) [closed]

    - by eterps
    I recently attended a conference and met some insanely talented developers with mile-long lists of accomplishments. One puzzling thing I noticed about many of these wildly successful people is that a lot of their output (books, blogs, games, mobile apps) is done as hobby projects outside of their normal jobs. Not only this, but most of them were married, and some even with children. I'm absolutely baffled by this. My question is targeted at folks who are married, working full time, and also spend time outside of work on hobby projects or extra programming. The question is: how many hours do you put into programming outside of your normal job, and how do you balance it with your family life?

    Read the article

  • Practices for keeping JavaScript and CSS in sync?

    - by Rene Saarsoo
    I'm working on a large JavaScript-heavy app. Several pieces of JavaScript have some related CSS rules. Our current practice is for each JavaScript file to have an optional related CSS file, like so:

        MyComponent.js    // Adds CSS class "my-comp" to div
        MyComponent.css   // Defines .my-comp { color: green }

    This way I know that all CSS related to MyComponent.js will be in MyComponent.css. But the thing is, I all too often have very little CSS in those files. And all too often I feel that it's too much effort to create a whole file to contain just a few lines of CSS - it would be easier to just hardcode the styles inside the JavaScript. But this would be the path to the dark side...

    Lately I've been thinking of embedding the CSS directly inside the JavaScript - it could still be extracted in the build process and merged into one large CSS file. This way I wouldn't have to create a new file for every little piece of CSS. Additionally, when I move/rename/delete the JavaScript file, I don't have to also move/rename/delete the CSS file. But how to embed CSS inside JavaScript? In most other languages I would just use a string, but JavaScript has some issues with multiline strings. The following looks IMHO quite ugly:

        Page.addCSS("\
            .my-comp > p {\
                font-weight: bold;\
                color: green;\
            }\
        ");

    What other practices do you have for keeping your JavaScript and CSS in sync?

    Read the article

  • What to do when ServerSocket throws IOException while keeping the server running

    - by s5804
    Basically I want to create a rock-solid server.

        while (keepRunning.get()) {
            try {
                Socket clientSocket = serverSocket.accept();
                // ... spawn a new thread to handle the client ...
            } catch (IOException e) {
                e.printStackTrace();
                // NOW WHAT?
            }
        }

    In the IOException block, what should I do? Is the server socket at fault, so it needs to be recreated? For example, wait a few seconds and then:

        serverSocket = ServerSocketFactory.getDefault().createServerSocket(MY_PORT);

    However, if the server socket is still OK, it would be a pity to close it and kill all previously accepted connections that are still communicating.

    EDIT: After some answers, here is my attempt to deal with the IOException. Would this implementation guarantee keeping the server up, re-creating the server socket only when necessary?

        while (keepRunning.get()) {
            try {
                Socket clientSocket = serverSocket.accept();
                // ... spawn a new thread to handle the client ...
                bindExceptionCounter = 0;
            } catch (IOException e) {
                e.printStackTrace();
                recreateServerSocket();
            }
        }

        private void recreateServerSocket() {
            while (keepRunning.get()) {
                try {
                    logger.info("Try to re-create Server Socket");
                    ServerSocket socket = ServerSocketFactory.getDefault()
                            .createServerSocket(RateTableServer.RATE_EVENT_SERVER_PORT);
                    // No exception thrown, so use the new socket.
                    serverSocket = socket;
                    break;
                } catch (BindException e) {
                    logger.info("BindException indicates that the server socket is still good.", e);
                    bindExceptionCounter++;
                    if (bindExceptionCounter < 5) {
                        break;
                    }
                } catch (IOException e) {
                    logger.warn("Problem to re-create Server Socket", e);
                    e.printStackTrace();
                    try {
                        Thread.sleep(30000);
                    } catch (InterruptedException ie) {
                        logger.warn(ie);
                    }
                }
            }
        }

    Read the article

  • Good tools for keeping the content in test/staging/live environments synchronized

    - by David Stratton
    I'm looking for recommendations on automated folder-synchronization tools to keep the content in our three environments synchronized automatically. Specifically, we have several applications where a user can upload content (via a File Upload page or a similar mechanism), such as images, PDF files, Word documents, etc. In the past, we had users doing this on our live server, and as a result our test and staging servers had to be manually synchronized. Going forward, they will upload content to the staging server, and we would like some software to automatically copy the files off to the test and live servers, EITHER on a scheduled basis OR as the files get uploaded.

    I was planning on writing my own component, and either setting it up as a scheduled task or using a FileSystemWatcher, but it occurred to me that this has probably already been done, and I might be better off with some sort of synchronization tool that already exists. On our web site, there are a limited number of folders that we want to keep synchronized. In these folders, it is all or nothing - we want to make sure the folders are EXACT duplicates. This should make it fairly straightforward, and I would think that any software that can synchronize folders would be OK, except that we also would like the software to log changes. (This rules out simple batch files.)

    So I'm curious: if you have a similar environment, how did you solve the challenge of keeping everything synchronized? Are you aware of a tool that is reliable and will meet our needs? If not, do you have a recommendation for something that will come close, or better yet, an open-source solution where we can get the code and modify it as needed? (Preferably .NET.)

    Added: I DID google this first, but there are so many options that I am interested mostly in knowing what actually works well, versus what the vendors SAY works, which is why I'm asking here.
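
    If the roll-your-own route wins, the watcher half is only a few dozen lines. A sketch of the idea using Java's WatchService as a stand-in for .NET's FileSystemWatcher (the paths are illustrative, and a real version would need to debounce events so half-written uploads aren't copied):

        import java.nio.file.*;
        import static java.nio.file.StandardWatchEventKinds.*;

        public class UploadMirror {
            public static void main(String[] args) throws Exception {
                Path watched = Paths.get("/staging/uploads"); // assumed source folder
                Path mirror  = Paths.get("/live/uploads");    // assumed target folder

                WatchService watcher = FileSystems.getDefault().newWatchService();
                watched.register(watcher, ENTRY_CREATE, ENTRY_MODIFY, ENTRY_DELETE);

                while (true) {
                    WatchKey key = watcher.take(); // blocks until something changes
                    for (WatchEvent<?> event : key.pollEvents()) {
                        if (event.kind() == OVERFLOW) continue; // events may have been lost
                        Path name = (Path) event.context();
                        if (event.kind() == ENTRY_DELETE) {
                            Files.deleteIfExists(mirror.resolve(name));
                        } else {
                            Files.copy(watched.resolve(name), mirror.resolve(name),
                                       StandardCopyOption.REPLACE_EXISTING);
                        }
                        // The change log the question asks for:
                        System.out.println(event.kind() + " " + name);
                    }
                    if (!key.reset()) break; // folder no longer accessible
                }
            }
        }

    This watches a single directory (subdirectories would each need their own registration), which matches the limited-number-of-folders constraint in the question.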

    Read the article

  • Keeping updated with the latest technologies.

    - by Prashanth
    I am a software developer and I have been programming for the past six years. I simply love the mental challenge involved in trying to come up with solutions to hard problems, reading programming literature, blogs by prominent developers, and so on. I work on the Microsoft platform, and I have trouble keeping up with the pace at which various frameworks are rolled out: Remoting, WCF, ASP.NET, ASP.NET MVC, LINQ, WPF, WWF, OSLO, ADO.NET Data Services, DSL tools, etc. Even understanding all these frameworks at an abstract level, and seeing how they tie into the MS vision itself, is a major hurdle. Now when you add other non-Microsoft technologies, programming languages, etc. to the equation, I wonder how people manage. Given that there are only 24 hours in a day, how does one keep updated on the many technology changes that happen every day? My question is: is it even worth trying? The thing is, I am also interested in other fields, such as literature and science. I try my best to gain at least a superficial understanding of what is happening in the other fields I'm interested in, and I don't want to give up on that :)

    Read the article

  • How to reduce a data frame keeping the order for other columns

    - by betabandido
    I am trying to reduce a data frame using the max function on a given column. I would like to preserve other columns, but keep the values from the same rows where each maximum value was selected. An example will make this explanation easier. Let us assume we have the following data frame:

        dframe <- data.frame(list(BENCH=sort(rep(letters[1:4], 4)),
                                  CFG=rep(1:4, 4),
                                  VALUE=runif(4 * 4)))

    This gives me:

           BENCH CFG      VALUE
        1      a   1 0.98828096
        2      a   2 0.19630597
        3      a   3 0.83539540
        4      a   4 0.90988296
        5      b   1 0.01191147
        6      b   2 0.35164194
        7      b   3 0.55094787
        8      b   4 0.20744004
        9      c   1 0.49864470
        10     c   2 0.77845408
        11     c   3 0.25278871
        12     c   4 0.23440847
        13     d   1 0.29795494
        14     d   2 0.91766057
        15     d   3 0.68044728
        16     d   4 0.18448748

    Now, I want to reduce the data in order to select the maximum VALUE for each different BENCH:

        aggregate(VALUE ~ BENCH, dframe, FUN=max)

    This gives me the expected result:

           BENCH     VALUE
        1      a 0.9882810
        2      b 0.5509479
        3      c 0.7784541
        4      d 0.9176606

    Next, I tried to preserve other columns:

        aggregate(cbind(VALUE, CFG) ~ BENCH, dframe, FUN=max)

    This reduction returns:

           BENCH     VALUE CFG
        1      a 0.9882810   4
        2      b 0.5509479   4
        3      c 0.7784541   4
        4      d 0.9176606   4

    Both VALUE and CFG are reduced using the max function. But this is not what I want. For instance, in this example I would like to obtain:

           BENCH     VALUE CFG
        1      a 0.9882810   1
        2      b 0.5509479   3
        3      c 0.7784541   2
        4      d 0.9176606   2

    where CFG is not reduced, but simply keeps the value associated with the maximum VALUE for each different BENCH. How could I change my reduction in order to obtain the last result shown?

    Read the article

  • Keeping a web app project organized?

    - by user246114
    Hi, I'm writing a web app, using JSP to create the page content. I need a pretty good amount of JavaScript to make the app work. Does anyone have any recommendations on how to structure my project so that it doesn't become a mess? This is a broad question, but the basic problem is that I'm inserting JavaScript code directly into my JSP content. Then I might have some external .js files. IDs and such are strewn across multiple files. I'm not really sure what best practice is for keeping this type of project organized. Do you always keep your JavaScript in separate files? There have to be a few hooks for them in the JSP pages though, right? I tried using GWT because I'm really a C/Java developer, and I was hoping it would help keep my project more organized (it definitely helps), but GWT is a pain to use with JSP - it really wants you to do all UI generation client-side after the page is done loading, which doesn't work for what I need to do. Again, broad question; any tips would be great. Thanks

    Read the article

  • How are these numbers converted to a readable Date/Time string?

    - by duckwizzle
    I have 2 XML files I'm reading - one has a date/time attribute that's readable (ex. May 1, 2010 12:03:14 AM) and the other... not so much (ex. 1272686594492). Both files have the complicated date/time format, but only the newer one also has the readable version. I cannot figure out how to make the complicated version readable. Any ideas? The numbers are in the pastebin below.

    http://pastebin.com/HMLEAGhf

    Thanks!
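
    The long value looks like a Unix timestamp in milliseconds: 1272686594492 ms is 2010-05-01 04:03:14 UTC, which is exactly May 1, 2010 12:03:14 AM in US Eastern time - matching the readable attribute. A quick Java check (the time zone is an assumption about whatever produced the file):

        import java.time.Instant;
        import java.time.ZoneId;
        import java.time.format.DateTimeFormatter;

        public class EpochCheck {
            public static void main(String[] args) {
                long raw = 1272686594492L; // value from the older XML file
                DateTimeFormatter fmt = DateTimeFormatter
                        .ofPattern("MMMM d, yyyy h:mm:ss a")
                        .withZone(ZoneId.of("America/New_York")); // assumed producer zone
                System.out.println(fmt.format(Instant.ofEpochMilli(raw)));
                // prints: May 1, 2010 12:03:14 AM
            }
        }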

    Read the article

  • Keeping DB Table sorted using multi-field formula (Microsoft SQL)

    - by user298167
    Hello everybody. I have a Job table which has two interesting columns: CreationDate and Importance (high - 3, medium - 2, low - 1). A job's priority is calculated like this: Priority = Importance * (time passed since creation). The problem is, every time I want to pick the 200 jobs with the highest priority, and I don't want to re-sort the table each time. Is there a way to keep the rows sorted? I was also thinking about having three tables, one each for high, medium, and low, and then sorting those by creation date. Thanks
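
    One observation worth making: within a single importance level the ranking never changes (it is just creation-date order, which is what the three-table idea exploits), but jobs of different importance levels overtake one another as time passes, so no stored global order can stay valid - the ranking has to be evaluated against a fixed "now" at query time. A hedged sketch of that selection done in application code with a bounded min-heap, so only the current top 200 stay in memory (the Job shape is illustrative):

        import java.time.Instant;
        import java.util.Comparator;
        import java.util.List;
        import java.util.PriorityQueue;

        public class TopJobs {
            record Job(long id, int importance, Instant created) {}

            // Priority = Importance * seconds elapsed since creation, at a fixed instant.
            static double priority(Job j, Instant now) {
                return j.importance() * (double) (now.getEpochSecond() - j.created().getEpochSecond());
            }

            static List<Job> top(Iterable<Job> jobs, int n) {
                Instant now = Instant.now(); // freeze "now" for a consistent ranking
                PriorityQueue<Job> heap =
                        new PriorityQueue<>(Comparator.comparingDouble((Job j) -> priority(j, now)));
                for (Job j : jobs) {
                    heap.offer(j);
                    if (heap.size() > n) heap.poll(); // evict the current lowest priority
                }
                return heap.stream()
                           .sorted(Comparator.comparingDouble((Job j) -> priority(j, now)).reversed())
                           .toList();
            }
        }

    The SQL-side equivalent is ordering by the computed expression (e.g. TOP 200 ... ORDER BY Importance * DATEDIFF(second, CreationDate, GETDATE()) DESC); and since each importance level is internally ordered by CreationDate, the three-table variant amounts to merging three already-sorted streams.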

    Read the article

  • Is it possible to cast the Elapsed Time function to Integer?

    - by nuvio
    I have the following function:

        (def elapsedtime
          (with-out-str
            (time (run-my-function))))

    and I was wondering if it is possible to store only the integer value of the time, as I can only store a string at the moment... Any suggestions? Thanks a lot.

    UPDATE: So I used this:

        (defmacro nsecs [expr]
          `(let [start# (. System (nanoTime))]
             ~expr
             (- (. System (nanoTime)) start#)))

    and then modified this:

        (def elapsedtime (nsecs (run-my-function argument1 argument2)))

    but it doesn't work. What am I doing wrong?

        Exception in thread "AWT-EventQueue-0" java.lang.IllegalArgumentException:
        Wrong number of args (1) passed to: main$fn--105$nsecs

    Read the article
