Search Results

Search found 20869 results on 835 pages for 'things i hate'.


  • Virtualised SharePoint Backup Strategies

    - by dunxd
    I have a SharePoint (MOSS 2007) farm running on three virtual machines in VMware ESX, plus a SQL Server backend on physical hardware. During a recent Business Continuity Planning event I tried restoring the SharePoint farm from only the config and content databases, and failed to get things working. My plan was to build a new SharePoint server, attach it to a restored config database and install the Central Administration site on this server, then reattach the content databases. This failed at the Central Administration part of the plan. So I am back to the drawing board on the best strategy for backup and recovery, with reducing the time and complexity of the restore job as the main objective. I haven't been able to find much in the way of discussion of backup/restore strategies for SharePoint in a VMware environment, so I figured I'd see if anyone on Server Fault has any ideas or experience.

    Read the article

  • zip being too nice (Mac OS X)

    - by stib
    I use zip to do a regular backup of a local directory onto a remote machine. They don't believe in things like rsync here, so it's the best I can do (?). Here's the script I use:

        echo $(date)>>~/backuplog.txt;
        if [[ -e /Volumes/backup/ ]]; then
            cd /Volumes/Non-RAID_Storage/;
            for file in projects/*; do
                nice -n 10 zip -vru9 /Volumes/backup/nonRaidStorage.backup.zip "$file" 2>&1 | grep -v "zip info: local extra (21 bytes)">>~/backuplog.txt;
            done;
        else
            echo "backup volume not mounted">>~/backuplog.txt;
        fi

    This all works fine, except that zip never uses much CPU, so it seems to be taking longer than it should. It never seems to get above 5%. I tried making it nice -20 but that didn't make any difference. Is it just the network or disc speeds bottlenecking the process, or am I doing something wrong?

    Read the article

  • Styles of games that work at low-resolution

    - by Brendan Long
    I'm taking a class on compilers, and the goal is to write a compiler for Meggy Jr devices (Arduino). The goal is just to make a simple compiler with loops and variables and stuff. Obviously, that's lame, so the "real goal" is to make an impressive game on the device. The problem is that it only has 64 pixels to work with (technically 72, but the top 8 are single-color and not part of the main display, so they're really only useful for displaying things like money). My problem is thinking of something to do on a device that small. It doesn't really matter if it's original, but it can't be something that's already available. My first idea was "snake", but that comes with the SDK. Same with a side-scrolling shooter. Remaining ideas include a tower defense game (hard to write, hard to control), an RPG (same), and Tetris (lame). The problem is that all of the games I like require a high-resolution screen because they have a lot of text. Even a really simple game like NetHack would be hard, because each creature would be a single color. tl;dr: What styles of games require a. no text; and b. few enough objects that representing them each with a single color is acceptable? EDIT: To clarify, the display is 8x8 for a total of 64 pixels, not 64x64.

    Read the article

  • Remote Desktop not following display settings

    - by John
    I have my RDP client set up to use the highest settings for connecting to another PC on my LAN, which has display settings of 1280x1024x32-bit. RDP is specifically set to use 32-bit depth, but when I connect it drops to 16-bit. The PC I connect to is (amongst other things) used to do some 3D graphics. I don't expect great performance, just to check that it works... but it doesn't over RDP; the 3D app doesn't think the hardware is the same. Does RDP's integration with Windows mean it is providing some virtualised rendering system? Should I use something less 'clever', like VNC, to literally screen-grab the contents of the screen without altering the settings?

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley).

    From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release, all that information is available and more in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…

    Requirements – The root of all knowledge

    Requirements are the things that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS.

        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.

    Figure: Saving Queries locally can be useful
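    For illustration, here is a minimal sketch of running such a historical query from code instead of a saved *.wiql file. It assumes the TFS 2010 client assemblies (Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.WorkItemTracking.Client); the collection URL, project name, and date are placeholders, not values from this article:

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class AsOfQuery
        {
            static void Main()
            {
                // Hypothetical collection URL.
                var tpc = new TfsTeamProjectCollection(
                    new Uri("http://tfs:8080/tfs/DefaultCollection"));
                var store = tpc.GetService<WorkItemStore>();

                // The asof clause runs the query against the operational
                // store as it looked on the given date -- the "time machine".
                const string wiql =
                    "SELECT [System.Id], [System.Title] " +
                    "FROM WorkItems " +
                    "WHERE [System.TeamProject] = 'MyProject' " +
                    "AND [System.WorkItemType] = 'Requirement' " +
                    "ASOF '2010-01-01T00:00:00'";

                foreach (WorkItem wi in store.Query(wiql))
                    Console.WriteLine("{0}: {1}", wi.Id, wi.Title);
            }
        }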
    All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task which automatically associates it

    Because of this we can view the history on any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. Another, better method can be to explicitly link the Build output to the Build.

    Builds – Let's tie some more of this together

    Builds are the glue that helps us enable the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets, associate them with the Build, and change the “Integrated In” field of those Work Items.

    Figure: Find all of the Work Items to associate with

    The “Integrated In” field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item Type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them, it makes it difficult for them to reject a Requirement when it passes all of the tests, and also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the “Design” to “Ready” states.

    The second scenario is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful in the circumstance where you want to track a test and the test results of a Unit Test designed to prove the existence of, and then guard against the re-emergence of, a Bug.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many Integration and Regression tests that may be automated that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would require to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only Affect Requirements and not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member.

    Business Analysts manage Requirements and Change Requests
    Programmers manage Tasks and check in against Change Requests and Bugs
    Testers manage Bugs and Test Cases
    Build Masters manage Builds

    Although there is some crossover, there are still roles or “hats” that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • What are the best practices for phasing out obsolete code?

    - by P.Brian.Mackey
    I have the need to phase out an obsolete method. I am aware of the [Obsolete] attribute. Does Microsoft have a recommended best practice guide for doing this? Here's my current plan:

    A. I do not want to create a new assembly, because developers would have to add a new reference to their projects and I expect to get a lot of grief from my boss and co-workers if they must do this. We also do not maintain multiple assembly versions; we only use the latest version. Changing this practice would require changing our deployment process, which is a big issue (we would have to teach people how to do things with TFS instead of FinalBuilder, and get them to give up FinalBuilder).

    B. Mark the old method obsolete.

    C. Because the implementation is changing (not the method signature), I need to rename the method rather than create an overload. So, to make users aware of the proper method, I plan to add a message to the [Obsolete] attribute. This part bothers me, because the only change I'm making is decoupling the method from the connection string. But, because I'm not adding a new assembly, I see no way around this.

    Result:

        [Obsolete("Please don't use this anymore because it does not implement IMyDbProvider. Use XXX instead.")]
        /// <summary>
        ///
        /// </summary>
        /// <param name="settingName"></param>
        /// <returns></returns>
        public static Dictionary<string, Setting> ReadSettings(string settingName)
        {
            return ReadSettings(settingName, SomeGeneralClass.ConnectionString);
        }

        public Dictionary<string, Setting> ReadSettings2(string settingName)
        {
            return ReadSettings(settingName); // IMyDbProvider.ConnectionString private member added to class. Probably have to make this an instance method.
        }
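    One hedged sketch of how steps B and C can be staged using the [Obsolete] attribute's optional error flag; the type and member names below are adapted from the question, and this is a common pattern rather than an official Microsoft guideline:

        using System;
        using System.Collections.Generic;

        // Placeholder types so the sketch compiles; the real project defines these.
        public class Setting { }

        public static class SettingsReader
        {
            // Stage 1: warn. Call sites still compile, but every build
            // surfaces the redirect message as a compiler warning.
            [Obsolete("Decoupled from the global connection string. Use the connection-string overload instead.")]
            public static Dictionary<string, Setting> ReadSettings(string settingName)
            {
                return ReadSettings(settingName, LegacyConnectionString());
            }

            // Stage 2, in a later release: adding ', true' to the attribute
            // turns every remaining call site into a compile error, before
            // the method is finally deleted.
            // [Obsolete("Use the connection-string overload.", true)]

            public static Dictionary<string, Setting> ReadSettings(string settingName, string connectionString)
            {
                // Real lookup elided in this sketch.
                return new Dictionary<string, Setting>();
            }

            static string LegacyConnectionString() { return "server=placeholder;"; }
        }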

    Read the article

  • Run database checks but omit large tables or filegroups - New option in Ola Hallengren's Scripts

    - by Greg Low
    One of the things I've always wanted in DBCC CHECKDB is the option to omit particular tables from the check. The situation that I often see is that companies with large databases often have only one or two very large tables. They want to run DBCC CHECKDB on the database to check everything except those couple of tables, due to time constraints. I posted a request about this on the Connect site some time ago: https://connect.microsoft.com/SQLServer/feedback/details/611164/dbcc-checkdb-omit-tables-option

    The workaround from the product team was that you could script out the checks that you did want to carry out, rather than omitting the ones that you didn't. I didn't overly like this as a workaround, as clients often had a very large number of objects that they did want to check and only one or two that they didn't. I've always been impressed with the work that our buddy Ola Hallengren has done on his maintenance scripts. He pinged me recently about my old Connect item and said he was going to implement something similar. The good news is that it's available now. Here are some examples he provided of the newly-supported syntax:

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKDB'

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG',
            @Objects = 'AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG',
            @Objects = 'ALL_OBJECTS,-AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG',
            @FileGroups = 'AdventureWorks.PRIMARY'

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG',
            @FileGroups = 'ALL_FILEGROUPS,-AdventureWorks.PRIMARY'

    Note the syntax to omit an object from the list of objects, and the option to omit one filegroup. Nice! Thanks Ola! You'll find details here: http://ola.hallengren.com/

    Read the article

  • Which tool do you use for tracking daily/weekly progress on your long-term goals?

    - by NoCatharsis
    I read through this post and this post to get an idea of short-term task management applications out there. However, now I'm having a problem tracking my longer-term goals pertaining to things like career, education, and exercise. I just discovered Disciplanner which might meet my needs but haven't had time to set it up just yet. Not to mention, it took me about 2 weeks of playing with every online to-do list app I could find before I came to a decision on which to use. I'm very particular about my tasks and goal-setting, so I would prefer to get some advice from other superusers before diving into the web again to find another life-management website.

    Read the article

  • Oracle and Cavium to work together on Java SE 8 on 64-bit ARMv8

    - by Henrik Stahl
    We have been working for some time on a port of the standard Oracle JDK 8 to the upcoming 64-bit servers based on the new ARMv8 microarchitecture. At ARM TechCon 2013 in Santa Clara, California, we announced a roadmap with an expected GA in 2015. This project is going very well and is ahead of schedule. We will soon be at the point where we will make binaries available outside of Oracle - first in a managed beta program with select customers/partners, and sometime during the fall of 2014 as a public early access program. Unless something changes, we are looking at an early 2015 GA. We should be able to share a detailed ramp-down and GA plan by JavaOne 2014.

    One of the things we (obviously) need to produce a high-quality port is hardware for development and QA. We are therefore happy to announce that we will be collaborating with Cavium on this project. Cavium has been a supporter of the Java ecosystem for a long time, and we have numerous joint customers running various Java versions on Cavium MIPS and ARM-based hardware. Cavium has now agreed to provide us with development hardware and engineering resources so that we can certify and optimize the initial Oracle JDK 8 release on Cavium's ThunderX hardware. This is expected to improve the quality and performance of JDK 8 on ARMv8 in general, as well as on Cavium's hardware. For more information:

    Cavium announcement on the ThunderX product family
    Cavium announcement on Oracle collaboration

    As a reminder, we plan to release the Oracle JDK 8 port to 64-bit ARMv8 under the royalty-free (for general-purpose servers etc.) Binary Code License, but we have no current plans to open source it.

    Read the article

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. I (and my team of 3) have built a hosted web app that queues and routes customer chat requests to available customer service agents (it does other things as well, but this is enough background to illustrate the issue). The basic dev architecture today is:

    - a single-page AJAX web UI (ASP.NET MVC) with floating chat windows (think Gmail)
    - a backend Windows service to queue and route the chat requests; this service also logs the chats, calculates service levels, etc.
    - a Comet server product that routes data between the web frontend and the backend Windows service; this also helps us detect which Agents are still connected (online)

    And our hardware architecture today is:

    - 2 servers to host the web UI portion of the application
    - a load balancer to route requests to the 2 different web app servers
    - a third server to host the SQL Server DB and the backend Windows service responsible for queuing / delivering chats

    So as it stands today, one of the web app servers could go down and we would be OK. However, if something happened to the SQL Server / Windows service server, we would be boned. My question - how can I make this backend Windows service logic able to be spread across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available Agents, and route the chat to those agents. How can I make this more distributed? How can I make it so that the work of the backend Windows service can be spread across multiple machines for redundancy and uptime purposes? Will I need to re-write it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances - so maybe it is something I should be less concerned about? Thanks in advance for any help!
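    One direction for this is the competing-consumers pattern: move the queueing into a shared message queue and run the same stateless routing service on several machines. A minimal sketch, assuming classic MSMQ (System.Messaging) purely for illustration; the queue path and routing method are hypothetical, and on Rackspace Cloud a hosted queue service would play the same role:

        using System;
        using System.Messaging; // classic MSMQ client, used here only as an example broker

        class ChatWorker
        {
            static void Main()
            {
                // Hypothetical queue path. Every service instance on every
                // machine runs this same loop; the queue delivers each chat
                // request to exactly one receiver, so adding machines adds
                // both capacity and redundancy.
                using (var queue = new MessageQueue(@".\private$\chatRequests"))
                {
                    queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
                    while (true)
                    {
                        Message msg = queue.Receive();      // blocks until a chat request arrives
                        string chatRequest = (string)msg.Body;
                        RouteToAvailableAgent(chatRequest); // existing routing logic, now per machine
                    }
                }
            }

            static void RouteToAvailableAgent(string chatRequest)
            {
                // Placeholder for the current agent-lookup and routing code.
                Console.WriteLine("Routing: " + chatRequest);
            }
        }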

    Read the article

  • Steam on 64-bit 14.04: need some help, missing a few 32-bit libs

    - by YellowShark
    Steam says I'm missing the following libs; I'm hoping someone can help me get things in better shape:

        xyz@abc:~$ STEAM_RUNTIME=0 steam
        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME is disabled by the user
        Error: You are missing the following 32-bit libraries, and Steam may not run:
        libpangoft2-1.0.so.0
        libpango-1.0.so.0
        libgtk-x11-2.0.so.0
        libgdk_pixbuf-2.0.so.0
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        [2014-06-11 20:45:39] Startup - updater built May 29 2014 09:19:23
        [2014-06-11 20:45:39] Verifying installation...
        [2014-06-11 20:45:39] Verification complete
        [2014-06-11 20:45:42] Shutdown

    I tried installing the following i386 packages: libpango-1.0-0:i386, libpangoft2-1.0-0:i386, and libgdk-pixbuf2.0-0:i386, and symlinking the .so files (from usr/lib/i386whatever../) into the ~/.local/share/Steam/ubuntu12_32/ folder, but I wasn't able to find the right match for the gtk-x11 lib, and ultimately wound up with a different, but still non-working, situation. So I've back-tracked to this point, and have removed those i386 packages for now. It's worth noting that Steam runs if I don't use STEAM_RUNTIME=0. Also, Steam seemed to "recognize" the i386 versions of the libpango & libpangoft2 libs after I symlinked them into place during the course of my troubleshooting; when I would rerun STEAM_RUNTIME=0 steam, it wouldn't list those two items as missing anymore. Instead, though, I had a bunch of gtk-related issues: something about overlay-scrollbar not being available, as well as warnings that it can't find the murrine engine... a whole bunch of stuff that sounded like I'd gone too far down the wrong path. Anyhow, any help sorting this out would be appreciated, and thanks!

    Read the article

  • How to move a UIView along a curved CGPath as the user drags the view

    - by Felipe Cypriano
    I'm trying to build an interface where the user can move his finger around the screen and a list of images moves along a path. The idea is that the images' centers never leave the path. Most of the things I found were about how to animate using CGPath, and not about actually using the path as the track for a user movement. I need the objects to be kept on the path even if the user isn't moving his fingers over the path. For example (image below), if the object is at the beginning of the path and the user touches anywhere on the screen and moves his fingers from left to right, I need the object to move from left to right but following the path, that is, going up as it goes to the right towards the path's end.

    This is the path I've drawn; imagine that I'll have a view (any image) that the user can touch and drag along the path. There's no need to move the finger exactly over the path. If the user moves from left to right, the image should move from left to right, but going up if needed, following the path. This is how I'm creating the path:

        CGPoint endPointUp = CGPointMake(315, 124);
        CGPoint endPointDown = CGPointMake(0, 403);
        CGPoint controlPoint1 = CGPointMake(133, 187);
        CGPoint controlPoint2 = CGPointMake(174, 318);

        CGMutablePathRef path = CGPathCreateMutable();
        CGPathMoveToPoint(path, NULL, endPointUp.x, endPointUp.y);
        CGPathAddCurveToPoint(path, NULL, controlPoint1.x, controlPoint1.y, controlPoint2.x, controlPoint2.y, endPointDown.x, endPointDown.y);

    Any idea how I can achieve this?

    Read the article

  • Inheritance, commands and event sourcing

    - by Arthis
    In order not to redo things several times I wanted to factor out common stuff. For instance, let's say we have a cow and a horse. The cow produces milk, the horse runs fast, but both eat grass.

        public class Herbivorous
        {
            public void EatGrass(int quantity)
            {
                var evt = Build.GrassEaten.WithQuantity(quantity);
                RaiseEvent(evt);
            }
        }

        public class Horse : Herbivorous
        {
            public void RunFast()
            {
                var evt = Build.FastRun;
                RaiseEvent(evt);
            }
        }

        public class Cow : Herbivorous
        {
            public void ProduceMilk()
            {
                var evt = Build.MilkProduced;
                RaiseEvent(evt);
            }
        }

    To eat grass, the command handler should be:

        public class EatGrassHandler : CommandHandler<EatGrass>
        {
            public override CommandValidation Execute(EatGrass cmd)
            {
                Contract.Requires<ArgumentNullException>(cmd != null);
                var herbivorous = EventRepository.GetById<Herbivorous>(cmd.Id);
                if (herbivorous.IsNull())
                    throw new AggregateRootInstanceNotFoundException();
                herbivorous.EatGrass(cmd.Quantity);
                EventRepository.Save(herbivorous, cmd.CommitId);
            }
        }

    So far so good. I get a Herbivorous object and I have access to its EatGrass function; whether it is a horse or a cow doesn't really matter. The only problem is here:

        EventRepository.GetById<Herbivorous>(cmd.Id)

    Indeed, let's imagine we have a cow that has produced milk during the morning and now wants to eat grass. The EventRepository contains a MilkProduced event, and then comes the EatGrass command. With the CommandHandler, we are no longer in the presence of a cow, and the Herbivorous doesn't know anything about producing milk. What should it do? Ignore the event and continue, thus allowing the inheritance and "general" commands? Or throw an exception to forbid execution, which would mean only CowEatGrass and HorseEatGrass might exist as commands? Thanks for your help, I am just beginning with these kinds of problems, and I would be glad to hear from someone more experienced.
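    One common convention in event-sourced models, sketched below under stated assumptions rather than taken from the original post, is the first option: rehydration simply skips events the aggregate type does not declare a handler for. The base class and reflection-based dispatch here are hypothetical:

        using System;
        using System.Collections.Generic;
        using System.Reflection;

        public abstract class AggregateRoot
        {
            // Rehydration: dispatch each stored event to an Apply(TEvent)
            // overload if the concrete type declares one, and silently skip
            // the rest. A Herbivorous loaded from a cow's event stream then
            // ignores MilkProduced instead of throwing.
            public void LoadFromHistory(IEnumerable<object> history)
            {
                foreach (var evt in history)
                {
                    MethodInfo apply = GetType().GetMethod(
                        "Apply",
                        BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
                        null,
                        new[] { evt.GetType() },
                        null);

                    if (apply != null)
                        apply.Invoke(this, new[] { evt });
                }
            }
        }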

    Read the article

  • Oracle OpenWorld and JavaOne are right around the corner!

    - by Eric Jensen
    It's that time of year again! Oracle OpenWorld and JavaOne are now upon us. Below is a list of events and demos at OpenWorld and JavaOne where you can check out the cool stuff people are doing with our products, Berkeley DB and Database Mobile Server. We've got some exciting things lined up, hope to see you there!

    Keynote
    Wed 3 Oct, 2012, 8:00 AM - 9:30 AM
    Java Embedded: Market Strategy
    Hotel Nikko - Nikko Ballrooms II & III

    Conference Session
    Mon 1 Oct, 2012, 8:30 AM - 9:30 AM
    CON7100 - Developing with Berkeley DB and Oracle Database Mobile Server for Java Embedded
    Hotel Nikko - Nikko Ballroom II & III

    HOL (Hands-on Lab)
    Mon 1 Oct, 2012, 7:30 PM - 9:30 PM
    HOL7889 - Java SE Embedded Development Made Easy
    Hilton San Francisco - Franciscan A/B/C/D

    Demos
    JavaOne Exhibition: Unifying M2M and Mobile in Healthcare - Hilton Hotel Grand Ballroom Exhibition Hall, Booth 5605
    mFrontiers mFinity demo - Moscone South Left - 136

    Read the article

  • Data binding in web UI frameworks, what's the deal?

    - by c-smile
    I believe that most modern Web frameworks that claim to be MVC also have a notion of data binding in one form or another. Examples: AngularJS, EmberJS, KnockoutJS, etc. I am assuming that "data binding" is a declarative definition (oxymoron, no?) of a live link between data (a.k.a. model) and its representation (a.k.a. view), with some transformers in between (a.k.a. controllers). I understand why declarativeness is kind of appealing, but I also understand that, as usual, it comes with a price. In particular:

    1. Live binding is quite heavy, either with dirty watch (high CPU consumption) or with Object.observe() (high memory consumption, with high CPU load in some scenarios).
    2. There is a "frame" part in the word "framework", meaning there are boundaries/limits that can be hard to overcome if you need slightly more than it was designed for.

    Quite a usual time split: 90% of features are made in 10% of project time, but the remaining 10% take 90% of project time. I suspect (a.k.a. educated guess) that those MVC things are not helping to implement more functionality in less time... If so, their usage motivation is not quite clear. As an example: last week I wanted to find a virtual list idea/solution. I found one in vanilla JavaScript that is 120 LOC. An implementation of the same in AngularJS is about 420 LOC. Most of the code there seems like a fight with the framework itself... So here is my question: what benefits does that MVC stuff or data binding give us? Is it just a buzzword popular among project managers, or does it give us something useful? If the latter, then what exactly?

    Read the article

  • General questions regarding open-source licensing

    - by ndg
    I'm looking to release an open-source iOS software project, but I'm very new to the licensing side of things. While I'm aware that the majority of answers here will not be from lawyers, I'd appreciate it if anyone could steer me in the right direction. With the exception of the following requirements, I'm happy for developers to largely do whatever they want with the project's source code. I'm not interested in any copyleft licensing schemes, and while I'd like to encourage attribution in derivative works, it is not required. As such, my requirements are as follows:

    - Original source can be distributed and re-distributed (verbatim) both commercially and non-commercially, as long as the original copyright information, website link and license are maintained.
    - I wish to retain rights to any of the multimedia distributed as part of the project (sound effects, graphics, logo marks, etc). Such assets will be included to allow other developers to easily execute the project, but cannot be re-distributed in any manner.
    - I wish to retain rights to the application's name and branding.

    Further to selecting an applicable license, I have the following questions:

    - The project makes use of a number of third-party libraries (all licensed under variants of the MIT license). I've included the individual licenses within the source (and application) and believe I've met all requirements expressed in these licenses, but is there anything else that needs to be done before distributing them as part of my open-source project?
    - Also included in my project is a single proprietary, closed-source library that's used to power a small part of the application. I'm obviously unable to include this in the source release, but what's the best way of handling this? Should I simply weak-link the project and exclude it entirely from the Git project?

    Read the article

  • Good (manual) translation software

    - by S.Hoekstra
    I'm looking for good translation software. I do not mean something that does all the translation automatically for me, but rather something that aids me in translating large pieces of text; since it has to be a perfect translation, I can't leave it to computers alone. Something like http://translate.google.com/toolkit but with more options/functions would be great. Preferably freeware, of course (but paid is not a problem). At the moment I use the Google toolkit, since it's adequate for now, but I really need something more advanced. Looking for such software on Google etc. is really hard because of the confusion with translation services and things like Babelfish. Do you know any software like this? And maybe you'd like to share your thoughts/experiences?

    Read the article

  • OT: Fixing choppy video playback on OS X

    - by terrencebarr
    This is a bit off-topic, but I wanted to share because it seems a lot of people are running into issues with choppy video playback and stutter on Mac OS X. I am using a Mac Mini with Snow Leopard (10.6.8) as a home media center, and it has worked great in the past, playing back music and videos from multiple sources (web, QuickTime, VLC, EyeTV). A few weeks ago the video playback from all my sources started to become choppy, to stutter, and often the picture would hang for seconds at a time. Totally unusable. Drove me nuts for two weeks. After much research and trial-and-error, it turns out the problem was an outdated Flash Player, which seems to have messed up the video pipeline for the entire system. The short of it is: I updated the Flash Player to version 11 directly from the Adobe web site, rebooted the Mac Mini, and all is well again! Judging from the various posts across the web, video playback appears to be a fairly widespread problem for Mac users, and I hope this helps some of you out there! And I can't wait to get rid of Flash altogether – I can't count the times it has crashed my browser, hung my system, and screwed things up. Thanks Adobe ;-( Cheers, – Terrence

    Read the article

  • What kind of code would Kent Beck avoid unit testing?

    - by tieTYT
    I've been watching a few of the "Is TDD Dead?" talks on YouTube, and one of the things that surprised me is that Kent Beck seems to acknowledge that there are just some kinds of programs that aren't worth unit testing. For example, right here DHH says that Kent Beck is ... very happy to say "Well, TDD doesn't fit in this case, I'm just going to bail". It's frustrating to me that Kent Beck seems to acknowledge this, but nobody asks him to elaborate on it or give concrete examples. I'd like to know the situations where Kent Beck thinks TDD is a bad fit. Nobody can read his mind or speak for him, but I'm hoping he's been transparent enough through his books/tweets/whatever for someone to be able to answer. I'm not necessarily going to take what he says as gospel, but it would be useful to know that the times I've tried TDD and it just felt impossible/useless are situations that he would have bailed on himself. Or, if it turned out he would have tested that code, it'd suggest to me that I was approaching the process very wrong. I also think it would be enlightening to understand why he would bail on such projects. My opinion on why this is not a duplicate of "When is it appropriate to not unit test?": after skimming those answers, I'm not satisfied. For example, look at UncleBob's answer. He doesn't even acknowledge that such a situation exists. I really think there's value in understanding Kent Beck's position, not just a general, "What's your opinion?" type of question. After all, he's the father of TDD.

    Read the article

  • What should every programmer know about web development?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web application consider before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also, I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that, which standards? In what circumstances, and why? Provide a link to the standard's specification.

    Read the article

  • Can Windows XP be better than any Ubuntu (or Linux) distro for an old PC?

    - by Robert Vila
    The old laptop is a Toshiba 1800-100:

    CPU: Intel Celeron 800 MHz
    RAM: 128 MB (works OK)
    HDD: 15 GB (works OK)
    Graphics adapter: integrated 64-bit AGP graphics accelerator, BitBLT, 3D graphic acceleration, 8 MB video RAM

    Only Windows XP is installed, and it works OK: it can be used, but it is slow (and hateful). I thought that I could improve performance (and its look) easily, since it is an old PC (drivers and everything known for years...), by installing a light Linux distro. So I decided to install a light or customized Ubuntu distro, or an Ubuntu/Debian derivative, but I haven't been successful with any; not even booting LiveCDs: not even AntiX, not even Puppy. The Lubuntu wiki says that it won't work because the last two releases need more RAM (and some blogs say much more CPU - even Core Duo for the new Lubuntu!), let alone Xubuntu. The problems I have faced are:

    1. There are thousands of pages talking about the same 10/15 lightweight distros, and saying more or less the same things, but NONE talks about a simple thing such as what the RAM/swap-partition proportion should be for this kind of installation. NONE!
    2. Loading the LiveCD I have tried several different boot options (I don't understand much about this and there's ALWAYS a line of explanation missing) and never receive error messages. Booting just stops at different stages, but often seems to stop just when the X server is going to start. I am able to boot to the command line.
    3. I don't know whether the problem is RAM size or a problem with the graphics driver (which surprises me, because it is a well-known brand and line of computers). So I don't know if doing a partition with a swap partition would help booting the LiveCD.
    4. I would like to try the graphical interface with the LiveCD before installing, if doing the swap partition for this purpose would help. How can I do the partition? I tried to use Boot Rescue CD, but it advises me against continuing forward.

    I would appreciate any ideas regarding these questions. Thank you

    Read the article

  • Physically moving a hard drive from older iMac (c2d) to new iMac (i7) ?

    - by Inshim
    Instead of my usual habit of using SuperDuper to mirror my drive to a new computer, I just physically moved the hard drive from an older iMac to a new one. But... it now doesn't boot, getting stuck at the Apple logo screen. Since the hard drive that came with the new iMac works well, and my old drive works well when I return it to the older iMac, I conclude that there is some problem at the system/kernel level due to the different hardware. In the past I have done similar things (e.g. starting a C2D machine from a Core Duo in target disk mode), so perhaps the change in architecture to the i5/i7 is too problematic? The main point: do you know of any way to get the system to rebuild the proper versions of the system components for itself when booting? Are there certain directories that I can safely delete to make that happen? Thanks

    Read the article

  • Good C++ books regarding Performance?

    - by Leon
    Besides the books everyone knows about, like Meyers' three Effective C++/STL books, are there any other really good C++ books specifically aimed at performance code? Maybe for gaming, telecommunications, finance/high-frequency trading, etc.? When I say performance, I mean things where a normal C++ book wouldn't bother advising, because the gain in performance isn't worthwhile for 95% of C++ developers. Maybe suggestions like avoiding virtual pointers, going into great depth about inlining, etc.? A book going into great depth on C++ memory allocation or multithreading performance would obviously be very useful.

    Read the article

  • Branded Application Pages (layouts pages) in SharePoint 2010

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). Application pages are now branded by default in SharePoint 2010. WOOHOO!!! The DynamicMasterPageFile attribute in SharePoint 2010 master pages allows application pages start using the site’s master page instead of the application master page. If you want backwards compatibility with SharePoint 2007, i.e. you want unbranded application pages, here is what you can do, a) You can change the MasterPageReferenceEnabled property to false in your SPWebApplication object, orb) Go to central administration\application management\manage web application\select your web app … go to the ribbon, look for general settings\general settings, and detach application pages from the site’s master page. I don’t see why you’d ever wanna do that, but hey if you want to .. go for it. This article was first published on blah.winsmarts.com. Stealing content is not cool. Safeguarded application pages Now for the fine print, there is something called as “Safeguarded application pages” in SP2010. These are pages, that IF IN CASE your custom master page screws up, they will automatically revert to use a master page that is guaranteed to work in the _layouts folder. Now that’s nice! That means, if you screw up, you always have a way to fix things. How nice! Here is a list of such safe guarded application pages - AccessDenied.aspx MngSiteAdmin.aspx People.aspx RecycleBin.aspx ReGhost.aspx ReqAcc.aspx Settings.aspx UserDisp.aspx ViewLsts.aspx Have fun! Comment on the article ....

    Read the article
