Search Results

Search found 9990 results on 400 pages for 'sampler state'.


  • Azure Mobile Services: lessons learned

    - by svdoever
    When I first started using Azure Mobile Services I thought of it as a nice way to:
    - authenticate my users - login using Twitter, Google, Facebook, Windows Live
    - create tables, and use the client code to create the columns in the table, because that is not possible in the Azure Mobile Services UI
    - run some JavaScript code on the table CRUD actions (Insert, Update, Delete, Read)
    - schedule a JavaScript job to run every 15 minutes or more
    I had no idea of the magic that was happening inside… Where is the data stored? Is it a kind of big table? Are relationships between tables possible? Those JavaScript scripts on the table CRUD actions - are they interpreted, and what are they exactly? After working for some time with Azure Mobile Services I became a lot wiser:
    - Those tables are just normal tables in an Azure SQL Server 2012 database.
    - Creating the table columns through client code sucks, at least from my JavaScript code, because the columns are deduced from the JSON data that is sent, and a datetime field is sent as a string in JSON, so a string column is created instead of a datetime column.
    - You can connect to the Azure SQL Server with SQL Management Studio, and although you can't manage your columns through the SQL Management Studio UI, it is possible to just run SQL scripts to drop and create tables and indices.
    - When you create a table through a SQL script, add a table with the same name in the Azure Mobile Services UI to hook it up and be able to access the table through the provided abstraction layer.
    - You can also go to the SQL database through the Azure Mobile Services UI, and from there get into a web-based SQL management studio where you can create columns and manage your data.
    - The table CRUD scripts and the scheduler scripts are full-blown node.js scripts, introducing a lot of power with great performance.
    - The web-based script editor is really powerful; I do most of my editing currently in the editor, which has syntax highlighting and code completion. While editing the code, JSHint is used for script validation.
    - The documentation on Azure Mobile Services is… suboptimal. It is such a pity that there is no way to comment on it so the community could fill in the gaps, like which node modules are already loaded and which modules are available on Azure Mobile Services.
    Soon I was hacking away on Azure Mobile Services, creating my own database tables through script, and abusing the read script of an empty table named "query" to implement my own set of "services". The latest updates to Azure Mobile Services, described in the following posts, added some great new features: creating web APIs, using shared code from your scripts, command-line tools for managing Azure Mobile Services (uploading and downloading scripts, for example), support for node modules, and git support:
    - http://weblogs.asp.net/scottgu/archive/2013/06/14/windows-azure-major-updates-for-mobile-backend-development.aspx
    - http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/14/custom-apis-in-azure-mobile-services.aspx
    - http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/19/custom-api-in-azure-mobile-services-client-sdks.aspx
    In the meantime I rewrote all my "service-like" table scripts as API scripts, which works like a breeze. A bad thing about the current state of Azure Mobile Services is that the git support does not work if you are a co-administrator of your Azure subscription rather than an administrator (as in my case).
    Another bad thing is that Cross-Origin Resource Sharing (CORS) is not supported for the API yet, so APIs are a no-go from the browser client for now, which is my case. See http://social.msdn.microsoft.com/Forums/windowsazure/en-US/2b79c5ea-d187-4c2b-823a-3f3e0559829d/known-limitations-for-source-control-and-custom-api-features for more on these and other limitations. In his talk at Build 2013, Josh Twist showed that there is a workaround for accessing shared script code from the table scripts as well (another limitation mentioned in the post above). I could not find that code in the Votabl2 code example from the presentation at https://github.com/joshtwist/votabl2, but we can grab it from the presentation when it comes online on Channel9. By the way: you can always express your needs and ideas at http://mobileservices.uservoice.com, that's the place where they are listening (I hope!).
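    For context, both the table CRUD scripts and the custom API scripts mentioned above are plain node.js modules. A minimal sketch of what one of those "service-like" custom API scripts might look like is shown below; the 'status' table name and the response shape are assumptions for illustration, not code from the post:
      // api/status.js - hypothetical Azure Mobile Services custom API script
      exports.get = function (request, response) {
          // request.service exposes the server-side helpers (tables, push, mssql, ...)
          var statusTable = request.service.tables.getTable('status');
          statusTable.read({
              success: function (results) {
                  // Hand the rows back to the caller as JSON
                  response.send(200, { items: results });
              }
          });
      };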

    Read the article

  • Get to Know a Candidate (8 of 25): Rocky Anderson&ndash;Justice Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post for whom I am voting. Information sourced for Wikipedia. Ross Carl “Rocky” Anderson served two terms as the 33rd mayor of Salt Lake City, Utah, between 2000 and 2008.  He is the Executive Director of High Road for Human Rights.  Prior to serving as Mayor, he practiced law for 21 years in Salt Lake City, during which time he was listed in Best Lawyers in America, was rated A-V (highest rating) by Martindale-Hubbell, served as Chair of the Utah State Bar Litigation Section[4] and was Editor-in-Chief of, and a contributor to, Voir Dire legal journal. As mayor, Anderson rose to nationwide prominence as a champion of several national and international causes, including climate protection, immigration reform, restorative criminal justice, LGBT rights, and an end to the "war on drugs". Before and after the invasion by the U.S. of Iraq in 2003, Anderson was a leading opponent of the invasion and occupation of Iraq and related human rights abuses. Anderson was the only mayor of a major U.S. city who advocated for the impeachment of President George W. Bush, which he did in many venues throughout the United States. Anderson's work and advocacy led to local, national, and international recognition in numerous spheres, including being named by Business Week as one of the top twenty activists in the world on climate change,serving on the Newsweek Global Environmental Leadership Advisory Board, and being recognized by the Human Rights Campaign as one of the top ten straight advocates in the United States for LGBT equality. He has also received numerous awards for his work, including the EPA Climate Protection Award, the Sierra Club Distinguished Service Award, the Respect the Earth Planet Defender Award, the National Association of Hispanic Publications Presidential Award, The Drug Policy Alliance Richard J. Dennis Drugpeace Award, the Progressive Democrats of America Spine Award, the League of United Latin American Citizens Profile in Courage Award, the Bill of Rights Defense Committee Patriot Award, the Code Pink (Salt Lake City) Pink Star honor, the Morehouse University Gandhi, King, Ikeda Award, and the World Leadership Award for environmental programs. Formerly a member of the Democratic Party, Anderson expressed his disappointment with that Party in 2011, stating, “The Constitution has been eviscerated while Democrats have stood by with nary a whimper. It is a gutless, unprincipled party, bought and paid for by the same interests that buy and pay for the Republican Party." Anderson announced his intention to run for President in 2012 as a candidate for the newly-formed Justice Party. Although founded by Rocky Anderson of Utah, the Justice Party was first recognized by Mississippi and describes itself as advocating economic justice through measures such as green jobs and a right to organize, environment justice through enforcing employee safeguards in trade agreements, and social and civic justice through universal health care. In its first press release, the Utah Justice Party set forth its goals for justice in the economic, environmental, social and civic realms, along with a call to rid the corrupting influence of big money from government, to reverse the erosion of rights guaranteed by the Constitution, and to stop draining American resources to support illegal wars of aggression. 
Its press release says its grassroots supporters believe that now is the time for all to "shed their skeptical view that their voices don't matter", that "our 2-party system is a 'duopoly' controlled by the same corporate and military interests", and that the people must act to ensure "that our nation will achieve a brighter, sustainable future.” Anderson has ballot access in CO, CT, FL, ID, LA, MI, MN, MS, NJ, NM, OR, RI, TN, UT, VT, WA (152 electoral votes) and has write-in access in AL, AK, DE, GA, IL, IO, KS, MD, MO, NE, NH, NY, PA, TX Learn more about Rocky Anderson and Justice Party on Wikipedia.

    Read the article

  • Data Security Through Structure, Procedures, Policies, and Governance

    Security Structure and Procedures
    One of the easiest ways to implement security is through the use of structure, in particular the structure in which data is stored. The preferred method for this is the use of User Roles; these Roles allow specific access to be granted based on what role a user plays in relation to the data they are manipulating. Typical data access actions are defined by the CRUD principle:
    - Create new data
    - Read existing data
    - Update existing data
    - Delete existing data
    Based on the actions assigned to a role, a user can manipulate data as needed to perform daily business operations. An example of this can be seen in a hospital where doctors have been assigned Create, Read, Update, and Delete access to their patients' prescriptions, so that a doctor can prescribe and adjust any existing prescriptions as necessary. A nurse, however, will only have Read access to the patient's prescriptions, so that they know what medicines to give to the patients. Notice that nurses do not have access to prescribe new prescriptions, or to update or delete existing prescriptions, because only the patient's doctor may perform those actions. With User Roles comes responsibility: companies need to constantly monitor data access to ensure that the proper roles have the most appropriate access levels and that users are not exposed to inappropriate data. This also protects against rogue employees gaining access to critical business information that could be destroyed, altered, or stolen. Because of this threat it is important that all data access is monitored.
    Security Governance
    Current data governance laws regarding security include:
    - Health Insurance Portability and Accountability Act (HIPAA)
    - Sarbanes-Oxley Act
    - Database Breach Notification Act
    The US Department of Health and Human Services defines the HIPAA Privacy Rule; this legislation protects the privacy of individually identifiable health information. Currently, HIPAA sets the national standards for securing electronic protected health records. Additionally, its confidentiality provisions protect identifiable information being used to analyze patient safety events and improve patient safety. In 2002, in the wake of the Enron and WorldCom financial scandals, Senator Paul Sarbanes and Representative Michael Oxley led the creation of the Sarbanes-Oxley Act. This act, administered by the Securities and Exchange Commission (SEC), dramatically altered corporate financial practices and data governance, and it also set specific deadlines for compliance. The Sarbanes-Oxley Act is not a set of standard business rules and does not specify how a company should retain its records; rather, it outlines which pieces of data are to be stored as well as the storage duration. The Database Breach Notification Act requires companies, in the event of a data breach involving personally identifiable information, to notify all California residents whose information was stored on the compromised system at the time of the event, according to Gregory Manter. He further explains that although this act is California legislation, it affects "any person or business that conducts business in California, and that owns or licenses computerized data that includes personal information," regardless of where the compromised data is located. This forces any business that maintains even limited interactions with California residents to be subject to the Act's provisions.
    Security Policies
    All companies must work in accordance with the appropriate city, county, state, and federal laws. One way to ensure that a company is legally compliant is to enforce security policies that adhere to the applicable legislation in the area or areas it services. These types of policies need to be mandated by a company's Security Officer; for smaller companies, these policies need to come from executives, directors, and owners.
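    As a purely illustrative sketch of the role-based CRUD structure described earlier in this entry, the hospital example could be modelled as a simple role-to-permission map; the role names and the checkAccess helper are assumptions for the example, not taken from the article:
      // Hypothetical role-to-CRUD permission map for the hospital example
      var permissions = {
          doctor: { create: true,  read: true, update: true,  delete: true  },
          nurse:  { create: false, read: true, update: false, delete: false }
      };
      // Returns true only if the given role grants the requested CRUD action
      function checkAccess(role, action) {
          var rolePermissions = permissions[role];
          return Boolean(rolePermissions && rolePermissions[action]);
      }
      // A nurse may read prescriptions but not create them
      console.log(checkAccess('nurse', 'read'));   // true
      console.log(checkAccess('nurse', 'create')); // false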

    Read the article

  • Stir Trek 2: Iron Man Edition

    Next month (7 May 2010) I'll be presenting at the second annual Stir Trek event in Columbus, Ohio. Stir Trek (so named because last year its themes mixed MIX and the opening of the Star Trek movie) is a very cool local event. It's a lot of fun to present at and to attend, because of its unique venue: a movie theater. And what's more, the cost of admission includes a private showing of a new movie (this year: Iron Man 2). The sessions cover a variety of topics (not just Microsoft), similar to CodeMash. The event recently sold out, so I'm not telling you all of this so that you can go and sign up (though I believe you can still get on the waitlist). Rather, this is pretty much just an excuse for me to talk about my session as a way to organize my thoughts. I'm actually speaking on the same topic as I did last year, but the key difference is that last year the subject of my session was nowhere close to being released, and this year it's RTM (as of last week). That's right, the topic is "What's New in ASP.NET 4". How did you guess?
    What's New in ASP.NET 4
    So, just what *is* new in ASP.NET 4? Hasn't Microsoft been spending all of their time on Silverlight and MVC the last few years? Well, actually, no. There are some pretty cool things that are now available out of the box in ASP.NET 4. There's a nice summary of the new features on MSDN. Here is my super-brief summary:
    - Extensible Output Caching: use providers like a distributed cache or file system cache
    - Preload Web Applications: IIS 7.5 only; avoid the startup tax for your site by preloading it
    - Permanent (301) Redirects: finally supported by the framework in one line of code, not two
    - Session State Compression: can speed up session access in a web farm environment; test it to see
    - Web Forms features, several of which mirror ASP.NET MVC advantages (viewstate, control ids): set meta keywords and description easily, granular and inheritable control over ViewState, support for more recent browsers and devices
    - Routing (introduced in 3.5 SP1): some new features and zero web.config changes required
    - Client ID control: makes client manipulation of DOM elements much simpler
    - Row Selection in Data Controls: fixed (id based, not row index based)
    - FormView and ListView enhancements (less markup, more CSS compliant)
    - New QueryExtender control: makes filtering data from other Data Source Controls easy
    - More CSS and accessibility support: reduction of tables and more control over output for other template controls
    - Dynamic Data enhancements: more control templates, support for inheritance in the Data Model, new attributes
    - ASP.NET Chart Control (learn more)
    - Lots of IDE enhancements
    - Web Deploy tool
    My session will cover many but not all of these features. There's only an hour (3pm-4pm), and it's right before the prize giveaway and movie showing, so I'll be moving quickly and most likely answering questions off-line via email after the talk. Hope to see you there!

    Read the article

  • Help finding missing mumble-server dependencies

    - by Otoris
    I'm trying to install the mumble-server package using apt-get install mumble-server on Ubuntu 11.10 Server Edition on Rackspace Cloud. Problem is it can't find dependencies it should have found because they exist on launchpad.net? Dependencies message: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: mumble-server : Depends: libavahi-compat-libdnssd1 (>= 0.6.16) but it is not installable Depends: libprotobuf7 but it is not installable Depends: libqt4-dbus (>= 4:4.5.3) but it is not installable Depends: libqt4-network (>= 4:4.5.3) but it is not installable Depends: libqt4-sql (>= 4:4.5.3) but it is not installable Depends: libqt4-xml (>= 4:4.5.3) but it is not installable Depends: libqtcore4 (>= 4:4.7.0~beta1) but it is not installable Depends: libqt4-sql-sqlite but it is not installable E: Unable to correct problems, you have held broken packages. Any ideas on if I might be missing sources? I've been googling around and haven't found anyone else in this situation or anyone else not able to install the aforementioned packages. Thanks for your time! sources.list: deb http://mirror.rackspace.com/ubuntu/ oneiric restricted deb-src http://mirror.rackspace.com/ubuntu/ oneiric restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://mirror.rackspace.com/ubuntu/ oneiric-updates restricted deb-src http://mirror.rackspace.com/ubuntu/ oneiric-updates restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://mirror.rackspace.com/ubuntu/ oneiric universe deb-src http://mirror.rackspace.com/ubuntu/ oneiric universe deb http://mirror.rackspace.com/ubuntu/ oneiric-updates universe deb-src http://mirror.rackspace.com/ubuntu/ oneiric-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://mirror.rackspace.com/ubuntu/ oneiric multiverse deb-src http://mirror.rackspace.com/ubuntu/ oneiric multiverse deb http://mirror.rackspace.com/ubuntu/ oneiric-updates multiverse deb-src http://mirror.rackspace.com/ubuntu/ oneiric-updates multiverse ## Uncomment the following two lines to add software from the 'backports' ## repository. ## N.B. software from this repository may not have been tested as ## extensively as that contained in the release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. # deb http://us.archive.ubuntu.com/ubuntu/ oneiric-backports restricted universe multiverse # deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-backports restricted universe multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. 
This software is not part of Ubuntu, but is ## offered by Canonical and the respective vendors as a service to Ubuntu ## users. # deb http://archive.canonical.com/ubuntu oneiric partner # deb-src http://archive.canonical.com/ubuntu oneiric partner deb http://security.ubuntu.com/ubuntu oneiric-security restricted deb-src http://security.ubuntu.com/ubuntu oneiric-security restricted deb http://security.ubuntu.com/ubuntu oneiric-security universe deb-src http://security.ubuntu.com/ubuntu oneiric-security universe deb http://security.ubuntu.com/ubuntu oneiric-security multiverse deb-src http://security.ubuntu.com/ubuntu oneiric-security multiverse # Cool Kid Webmin/Usermin Here Brah deb http://download.webmin.com/download/repository sarge contrib deb http://webmin.mirror.somersettechsolutions.co.uk/repository sarge contrib

    Read the article

  • LibGdx drawing weird behaviour

    - by Ryckes
    I am finding strange behaviour while rendering TextureRegions in my game, only when pausing it. I am making a game for Android, in Java with LibGdx. When I comment out the line "drawLevelPaused()" everything seems to work fine, both running and paused. When it's not commented, everything works fine until I pause the screen: then it draws those two rectangles, but maybe the ships are not shown, and if I comment out drawShips() and drawTarget() (just trying) maybe one of the planets disappears, or if I change the order, other things disappear and those that disappeared before are now rendered again. I can't find a way to fix this behaviour. I beg your help, and I hope it's my mistake and not a LibGdx issue. I use OpenGL ES 2.0, stated in AndroidManifest.xml, if it is of any help. Thank you in advance. My Screen render method (game loop) is as follows:
      @Override
      public void render(float delta) {
          Gdx.gl.glClearColor(0.1f, 0.1f, 0.1f, 1);
          Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
          controller.update(delta);
          renderer.render();
      }
    When the world state is PAUSED, controller.update does nothing at all; there is a switch in it. And renderer.render() is as follows:
      public void render() {
          int worldState = this.world.getWorldState();
          updateCamera();
          spriteBatch.begin();
          drawPlanets();
          drawTarget();
          drawShips();
          if (worldState == World.PAUSED) {
              drawLevelPaused();
          } else if (worldState == World.LEVEL_WON) {
              drawLevelWin();
          }
          spriteBatch.end();
      }
    And those methods are:
      private void updateCamera() {
          this.offset = world.getCameraOffset();
      }
      private void drawPlanets() {
          for (Planet planet : this.world.getPlanets()) {
              this.spriteBatch.draw(this.textures.getTexture(planet.getTexture()),
                  (planet.getPosition().x - this.offset[0]) * ppuX,
                  (planet.getPosition().y - this.offset[1]) * ppuY);
          }
      }
      private void drawTarget() {
          Target target = this.world.getTarget();
          this.spriteBatch.draw(this.textures.getTexture(target.getTexture()),
              (target.getPosition().x - this.offset[0]) * ppuX,
              (target.getPosition().y - this.offset[1]) * ppuY);
      }
      private void drawShips() {
          for (Ship ship : this.world.getShips()) {
              this.spriteBatch.draw(this.textures.getTexture(ship.getTexture()),
                  (ship.getPosition().x - this.offset[0]) * ppuX,
                  (ship.getPosition().y - this.offset[1]) * ppuY,
                  ship.getBounds().width * ppuX / 2,
                  ship.getBounds().height * ppuY / 2,
                  ship.getBounds().width * ppuX,
                  ship.getBounds().height * ppuY,
                  1.0f, 1.0f, ship.getAngle() - 90.0f);
          }
          if (this.world.getStillShipVisibility()) {
              Ship ship = this.world.getStillShip();
              Arrow arrow = this.world.getArrow();
              this.spriteBatch.draw(this.textures.getTexture(ship.getTexture()),
                  (ship.getPosition().x - this.offset[0]) * ppuX,
                  (ship.getPosition().y - this.offset[1]) * ppuY,
                  ship.getBounds().width * ppuX / 2,
                  ship.getBounds().height * ppuY / 2,
                  ship.getBounds().width * ppuX,
                  ship.getBounds().height * ppuY,
                  1f, 1f, ship.getAngle() - 90f);
              this.spriteBatch.draw(this.textures.getTexture(arrow.getTexture()),
                  (ship.getCenter().x - this.offset[0] - arrow.getBounds().width / 2) * ppuX,
                  (ship.getCenter().y - this.offset[1]) * ppuY,
                  arrow.getBounds().width * ppuX / 2, 0,
                  arrow.getBounds().width * ppuX,
                  arrow.getBounds().height * ppuY,
                  1f, arrow.getRate(), ship.getAngle() - 90f);
          }
      }
      private void drawLevelPaused() {
          this.shapeRenderer.begin(ShapeType.FilledRectangle);
          this.shapeRenderer.setColor(0f, 0f, 0f, 0.8f);
          this.shapeRenderer.filledRect(0, 0, this.width / this.ppuX, PAUSE_MARGIN_HEIGHT / this.ppuY);
          this.shapeRenderer.filledRect(0, (this.height - PAUSE_MARGIN_HEIGHT) / this.ppuY,
              this.width / this.ppuX, PAUSE_MARGIN_HEIGHT / this.ppuY);
          this.shapeRenderer.end();
          for (Button button : this.world.getPauseButtons()) {
              this.spriteBatch.draw(this.textures.getTexture(button.getTexture()),
                  (button.getPosition().x - this.offset[0]) * this.ppuX,
                  (button.getPosition().y - this.offset[1]) * this.ppuY);
          }
      }

    Read the article

  • How to Open Any Folder as a Project in the NetBeans Platform

    - by Geertjan
    Typically, as described in the NetBeans Project Type Tutorial, you'll define a project type based on the presence of a file (e.g., "project.xml" or "customer.txt" or something like that) in a folder. I.e., if the file is there, then its parent, i.e., the folder that contains the file, is a project and should be opened in your application. However, in some scenarios (as with the HTML5 project type introduced in NetBeans IDE 7.3), the user should be able to open absolutely any folder at all into the application. How to create a project type that is that liberal? Here you go: the only condition that needs to be true is that the selected item in the "Open Project" dialog is a folder, as defined in the "isProject" method below. Nothing else. That's it. If you select a folder, it will be opened in your application, displaying absolutely everything as-is (since below there's no ProjectLogicalView defined):
      import java.beans.PropertyChangeListener;
      import java.io.IOException;
      import javax.swing.Icon;
      import org.netbeans.api.project.Project;
      import org.netbeans.api.project.ProjectInformation;
      import org.netbeans.spi.project.ProjectFactory;
      import org.netbeans.spi.project.ProjectState;
      import org.openide.filesystems.FileObject;
      import org.openide.loaders.DataFolder;
      import org.openide.loaders.DataObjectNotFoundException;
      import org.openide.nodes.FilterNode;
      import org.openide.util.Exceptions;
      import org.openide.util.ImageUtilities;
      import org.openide.util.Lookup;
      import org.openide.util.lookup.Lookups;
      import org.openide.util.lookup.ServiceProvider;
      @ServiceProvider(service = ProjectFactory.class)
      public class FolderProjectFactory implements ProjectFactory {
          @Override
          public boolean isProject(FileObject projectDirectory) {
              return DataFolder.findFolder(projectDirectory) != null;
          }
          @Override
          public Project loadProject(FileObject dir, ProjectState state) throws IOException {
              return isProject(dir) ? new FolderProject(dir) : null;
          }
          @Override
          public void saveProject(Project prjct) throws IOException, ClassCastException {
              // leave unimplemented for the moment
          }
          private class FolderProject implements Project {
              private final FileObject projectDir;
              private Lookup lkp;
              private FolderProject(FileObject dir) {
                  this.projectDir = dir;
              }
              @Override
              public FileObject getProjectDirectory() {
                  return projectDir;
              }
              @Override
              public Lookup getLookup() {
                  if (lkp == null) {
                      lkp = Lookups.fixed(new Object[]{
                          new Info(),
                      });
                  }
                  return lkp;
              }
              private final class Info implements ProjectInformation {
                  @Override
                  public Icon getIcon() {
                      Icon icon = null;
                      try {
                          icon = ImageUtilities.image2Icon(
                                  new FilterNode(DataFolder.find(
                                  getProjectDirectory()).getNodeDelegate()).getIcon(1));
                      } catch (DataObjectNotFoundException ex) {
                          Exceptions.printStackTrace(ex);
                      }
                      return icon;
                  }
                  @Override
                  public String getName() {
                      return getProjectDirectory().getName();
                  }
                  @Override
                  public String getDisplayName() {
                      return getName();
                  }
                  @Override
                  public void addPropertyChangeListener(PropertyChangeListener pcl) {
                      // do nothing, won't change
                  }
                  @Override
                  public void removePropertyChangeListener(PropertyChangeListener pcl) {
                      // do nothing, won't change
                  }
                  @Override
                  public Project getProject() {
                      return FolderProject.this;
                  }
              }
          }
      }
    Even the ProjectInformation implementation really isn't needed at all, since it provides nothing more than the icon in the "Open Project" dialog; the rest (i.e., the display name in the "Open Project" dialog) is provided by default regardless of whether you have a ProjectInformation implementation or not.

    Read the article

  • Visual Studio 2012 and .NET 4.5 now Live!

    - by Tarun Arora
    Today was the formal launch event for Visual Studio 2012 and .NET 4.5, a state-of-the-art development solution for building modern applications that span connected devices and continuous services, from the client to the cloud. The event was streamed live from http://visualstudiolaunch.com. S. Somasegar, corporate vice president of the Developer Division, opened the keynote, Jason Zander dived deeper into how to leverage Visual Studio 2012 and .NET 4.5 to build modern applications, and Brian Harry covered all the awesome features in Visual Studio 2012 that improve application lifecycle management.
    I. Summary of the announcements made today
    1. Visual Studio Updates coming this fall – the VS Update will better support agile teams, enable continuous quality, elevate SharePoint development with application lifecycle management (ALM) tools, and expand Visual Studio 2012 Windows development capabilities. It will be available as a community technology preview (CTP) later this month and in final release later this calendar year. A comprehensive list of what will be on offer can be found here.
    2. Visual Studio Express 2012 for Windows Desktop – Visual Studio Express 2012 for Windows Desktop brings the newest desktop development capabilities in Visual Studio 2012 to Express users, too. You would be excited to know that the Express SKU will support integration with TFS; among the other cool features I would like to mention are unit testing, code analysis, and dependency management with NuGet. A full list and download links can be found here.
    3. F# tools for Visual Studio Express 2012 for Web – this F# Tools release adds F# 3.0 components, such as the F# 3.0 compiler, F# Interactive, IDE support, and new F# features such as type providers and query expressions, to your Visual Studio 2012 Express for Web. More details and download links can be found here.
    4. Visual Studio TFS 2012 Power Tools – the TFS 2012 Power Tools bring the goodness of the Best Practice Analyzer, Process Template Editor, Storyboard Shapes, Team Explorer enhancements, the TFPT command line, TFS server backups, etc. to your TFS 2012 installation. They can be downloaded right away from here.
    II. Road shows
    There will be many more community road shows this month, packed with hours of demos and discussions. The Visual Studio UK Team has just announced that there will be four UK launch events, face-to-face sessions including a product group speaker and partner sessions:
    - Edinburgh, 1st October
    - Manchester, 3rd October
    - London, 4th October
    - Reading, 5th October
    III. Get Started
    Download Visual Studio 2012 and the additional supporting software from here. The Visual Studio development team has put together over 60 videos to help you learn about the new Visual Studio 2012 capabilities in more detail, and all of these will be available for watching here.
    IV. What's Next
    A lot more exciting stuff lined up…
    - Windows 8 – anticipated release: Oct. 26 (UPDATED 9/12)
    - Windows Server 2012 – released (UPDATED 9/4)
    - System Center 2012 – released (UPDATED 9/11)
    - SQL Server 2012 – released (UPDATED 4/2)
    - Internet Explorer 10 – anticipated release: between Q3 2012 and early 2013 (UPDATED 5/3)
    - Office 2013 – anticipated release: Q4 2012 or Q1 2013 (UPDATED 9/12)
    - Exchange 2013 – anticipated release: Q4 2012 (UPDATED 7/26)
    - Visual Studio 2012 – released (UPDATED 9/12)
    - Kinect for Windows – released (UPDATED 9/4)
    - Windows Phone "Tango" and 8 – "Tango": released; anticipated "Windows Phone 8" release: Q4 2012 (UPDATED 9/5)
    - Dynamics ERP Online – anticipated release: September or October 2012 (UPDATED 7/20)
    - Office 365 – anticipated update schedule: "almost weekly" (UPDATED 9/12)
    - Windows Azure – rumored CTP release: Spring 2012 (UPDATED 9/7)
    - SharePoint 2013 – anticipated release: Q4 2012 (UPDATED 8/21)
    Enjoy

    Read the article

  • Create a Remote Git Repository from an Existing XCode Repository

    - by codeWithoutFear
    Introduction
    Distributed version control systems (VCSs), like Git, provide a rich set of features for managing source code.  Many development tools, including XCode, provide built-in support for various VCSs.  These tools provide simple configuration with limited customization to get you up and running quickly, while still providing the safety net of basic version control. I hate losing (and re-doing) work.  I have OCD when it comes to saving and versioning source code.  Save early, save often, and commit to the VCS often.  I also hate merging code.  Smaller and more frequent commits enable me to minimize merge time and effort as well. The workflow I prefer, even for personal exploratory projects, is:
    1. Make small local changes to the codebase to create an incrementally improved (and working) system.
    2. Commit these changes to the local repository.  Local repositories are quick to access, function even while offline, and provide the confidence to continue making bold changes to the system.  After all, I can easily recover to a recent working state.
    3. Repeat 1 & 2 until the codebase contains "significant" functionality and I have connectivity to the remote repository.
    4. Push the accumulated changes to the remote repository.  The smaller the change set, the less likely extensive merging will be required.  Smaller is better, IMHO.
    The remote repository typically has a greater degree of fault tolerance and active management dedicated to it.  This can be as simple as a network share that is backed up nightly, or as complex as dedicated hardware with specialized server-side processing and significant administrative monitoring. XCode's out-of-the-box Git integration enables steps 1 and 2 above.  Time Machine backups of the local repository add an additional degree of fault tolerance, but do not support collaboration or take advantage of managed infrastructure such as on-premises or cloud-based storage.
    Creating a Remote Repository
    These are the steps I use to enable the full workflow identified above.  For simplicity the "remote" repository is created on the local file system.  This location could easily be on a mounted network volume.
    Create a Test Project
    My project is called HelloGit and is located at /Users/Don/Dev/HelloGit.  Be sure to commit all outstanding changes.  XCode always leaves a single changed file for me after the project is created and the initial commit is submitted.
    Clone the Local Repository
    We want to clone the XCode-created Git repository to the location where the remote repository will reside.  In this case it will be /Users/Don/Dev/RemoteHelloGit.
    - Open the Terminal application.
    - Clone the local repository to the remote repository location: git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit
    Convert the Remote Repository to a Bare Repository
    The remote repository only needs to contain the Git database.  It does not need a checked out branch or local files.
    - Go to the remote repository folder: cd /Users/Don/Dev/RemoteHelloGit
    - Indicate the repository is "bare": git config --bool core.bare true
    - Remove the files, leaving the .git folder: rm -R *
    - Remove the "origin" remote: git remote rm origin
    Configure the Local Repository
    The local repository should reference the remote repository.  The remote name "origin" is used by convention to indicate the originating repository.  This is set automatically when a repository is cloned.  We will use the "origin" name here to reflect that relationship.
    - Go to the local repository folder: cd /Users/Don/Dev/HelloGit
    - Add the remote: git remote add origin /Users/Don/Dev/RemoteHelloGit
    Test Connectivity
    Any changes made to the local Git repository can be pushed to the remote repository, subject to the merging rules Git enforces.
    - Create a new local file: date > date.txt
    - Add the new file to the local index: git add date.txt
    - Commit the change to the local repository: git commit -m "New file: date.txt"
    - Push the change to the remote repository: git push origin master
    Now you can save, commit, and push/pull to your OCD heart's content! Code without fear! --Don

    Read the article

  • Swap not available on System Monitor

    - by Zaki
    I had a swap partition of 1GB (RAM 1GB, Ubuntu 12.04 lts). Now swap is not shown on System Monitor neither can I hibernate my pc (sudo pm-hibernate). blkid output: /dev/sda1: UUID="B8B4FBB1B4FB706C" TYPE="ntfs" /dev/sda2: UUID="2ea7d608-2d89-4e41-9436-d05cb3ce8871" TYPE="swap" /dev/sda3: UUID="3219d03a-67e4-454b-8ce7-a27831846e35" TYPE="ext4" /dev/sda5: LABEL="Softwares" UUID="AC1CC3301CC2F47C" TYPE="ntfs" /dev/sda6: LABEL="Education" UUID="1E103E6C103E4B53" TYPE="ntfs" /dev/sda7: LABEL="Recreation" UUID="2CC8D181C8D149AA" TYPE="ntfs" /dev/sda8: LABEL="Miscellaneous" UUID="0274D6B174D6A727" TYPE="ntfs" /etc/fstab # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda6 during installation UUID=3219d03a-67e4-454b-8ce7-a27831846e35 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=2ea7d608-2d89-4e41-9436-d05cb3ce8871 none swap sw 0 0 free -m total used free shared buffers cached Mem: 991 867 123 0 27 418 -/+ buffers/cache: 421 569 Swap: 0 0 0 cat /proc/swaps Filename Type Size Used Priority fdisk -l Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x9f369f36 Device Boot Start End Blocks Id System /dev/sda1 * 63 31471334 15735636 7 HPFS/NTFS/exFAT /dev/sda2 31471616 33470447 999416 82 Linux swap / Solaris /dev/sda3 33472512 62539775 14533632 83 Linux /dev/sda4 62541045 312592769 125025862+ f W95 Ext'd (LBA) /dev/sda5 62541108 125066024 31262458+ 7 HPFS/NTFS/exFAT /dev/sda6 125066088 187591004 31262458+ 7 HPFS/NTFS/exFAT /dev/sda7 187591068 250115984 31262458+ 7 HPFS/NTFS/exFAT /dev/sda8 250116048 312576704 31230328+ 7 HPFS/NTFS/exFAT swapon --all swapon: /dev/sda2: swapon failed: Invalid argument dmesg | grep -A 5 -B 5 -i swap [ 9.487404] EXT4-fs (sda3): ext4_orphan_cleanup: deleting unreferenced inode 131645 [ 9.487413] EXT4-fs (sda3): ext4_orphan_cleanup: deleting unreferenced inode 131330 [ 9.487418] EXT4-fs (sda3): 16 orphan inodes deleted [ 9.487420] EXT4-fs (sda3): recovery complete [ 9.578600] EXT4-fs (sda3): mounted filesystem with ordered data mode. Opts: (null) [ 20.580539] Swap area shorter than signature indicates [ 20.588363] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready [ 20.619443] udevd[330]: starting version 175 [ 20.649959] lp: driver loaded but no devices found [ 20.662972] [drm] Initialized drm 1.1.0 20060810 [ 20.675515] i915 0000:00:02.0: setting latency timer to 64 -- [ 72.288573] PM: thaw of drv:sr dev:3:0:0:0 complete after 178.143 msecs [ 72.288578] PM: thaw of drv:scsi_device dev:3:0:0:0 complete after 178.136 msecs [ 72.299677] PM: thaw of drv:scsi_device dev:2:0:0:0 complete after 189.270 msecs [ 72.309473] PM: thaw of devices complete after 202.763 msecs [ 72.309668] PM: writing image. [ 72.309670] PM: Cannot find swap device, try swapon -a. [ 72.309699] PM: Cannot get swap writer [ 72.329896] Restarting tasks ... done. 
[ 72.331777] PM: Basic memory bitmaps freed [ 72.331792] video LNXVIDEO:00: Restoring backlight state [ 72.420048] option1 ttyUSB0: option_instat_callback: error -84 [ 72.804047] option1 ttyUSB0: option_instat_callback: error -84 -- [ 145.960625] sd 7:0:0:0: Attached scsi generic sg2 type 0 [ 145.972036] sd 7:0:0:0: [sdb] Attached SCSI removable disk [ 172.430508] PPP BSD Compression module registered [ 172.455583] PPP Deflate Compression module registered [ 332.260789] type=1400 audit(1381814763.342:27): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=636 comm="cupsd" pid=636 comm="cupsd" capability=36 capname="block_suspend" [ 1913.030998] Swap area shorter than signature indicates [ 2022.530155] type=1400 audit(1381816453.610:28): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=636 comm="cupsd" pid=636 comm="cupsd" capability=36 capname="block_suspend" [ 4062.729509] Swap area shorter than signature indicates Please help. Thanks in advance. df -h Filesystem Size Used Avail Use% Mounted on /dev/sda3 14G 6.1G 7.0G 47% / udev 488M 4.0K 488M 1% /dev tmpfs 199M 868K 198M 1% /run none 5.0M 4.0K 5.0M 1% /run/lock none 496M 224K 496M 1% /run/shm

    Read the article

  • Session and Pop Up Window

    - by imran_ku07
    Introduction:
        Session is secure state management. It allows the user to store information on one page and access it on another page, and it is powerful enough to store any type of object. Every user's session is identified by a cookie, which the client presents to the server. Unfortunately, when you open a new pop-up window, this cookie is not posted to the server with the request, due to which the server is unable to identify the session data for the current user. In this article I will show you how to handle this situation.
    Description:
        While working in an application, I was getting an exception saying that Session is null when a pop-up window opens. After looking at the problem more closely I found that the ASP.NET_SessionId cookie for the parent page was not posted in the cookie header of the child (popup) window. Therefore, to make the session present in both the parent and the child (popup) window, you have to present the same cookie. For cookie sharing I passed the parent SessionID in the query string when building the window.open call on the server:
            "window.open('http://abc.com/s.aspx?SASID=" & Session.SessionID & "','V');"
        Then, in the Application_PostMapRequestHandler application event, check whether the current request has no ASP.NET_SessionId cookie and the SASID query string is not null; if so, add this cookie to the Request before the Session is acquired, so that the Session data remains the same for both the parent and the popup window.
            Private Sub Application_PostMapRequestHandler(ByVal sender As Object, ByVal e As EventArgs)
                If (Request.Cookies("ASP.NET_SessionId") Is Nothing) AndAlso (Request.QueryString("SASID") IsNot Nothing) Then
                    Request.Cookies.Add(New HttpCookie("ASP.NET_SessionId", Request.QueryString("SASID")))
                End If
            End Sub
        Now you can access Session in your parent and child window without any problem.
    How this works:
        ASP.NET (both Web Forms and MVC) uses a cookie (ASP.NET_SessionId) to identify the user who is making the request. Cookies may be persistent (saved permanently in the user's cookie store) or non-persistent (saved temporarily in browser memory). The ASP.NET_SessionId cookie is saved as non-persistent. This means that if the user closes the browser, the cookie is immediately removed, which is a sensible step that ensures security. That is why ASP.NET is unable to identify that the request is coming from the same user, and every browser instance gets its own ASP.NET_SessionId. To resolve this, you need to present the same parent ASP.NET_SessionId cookie to the server when opening a popup window. You can confirm this situation by using tools like Firebug or Fiddler.
    Summary:
        Hopefully you will enjoy this article, having seen how to work around the problem of sharing Session between different browser instances by sharing their session identifier cookie.

    Read the article

  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background
    I'm using the lockstep model for a multiplayer Node.js/Socket.IO game in a client-server architecture. User input (mouse or keypress) is parsed into commands like 'attack' and 'move' on the client, which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to use due to bandwidth issues. Each tick, the server will send the list of commands on that tick (possibly empty) to each client. The server and all clients will then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple due to the possibility of code sharing between server and client. I'll just put the deterministic simulator in the /shared folder which can be run by both server and client. The server simulation is required so that there is an authoritative version of the simulation which clients cannot alter.
    Problem
    Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, each class has some methods that are shared and some that are client-specific. For instance, the Unit class has an addHp method which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if the unit can be seen by the client), onDeathInClient (does a bunch of stuff when it dies only on the client, like adding announcements) and isMyUnit (quick function to check if the client owns the unit). Up till now, I have been piling all the client functions into the shared Unit class, and adding a this.game.isServer() check when necessary. For instance, when the unit dies, it will call if (!this.game.isServer()) { this.onDeathInClient(); }. This approach has worked pretty fine so far, in terms of functionality. But as the codebase grew bigger, this style of coding seems a little strange. Firstly, the client code is clearly not shared, and yet is placed under the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite) and can run into problems when the server cannot instantiate the variable (it doesn't have an Image class like on browsers). So my question is, is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it does have its own problems.
    Possible workaround (with problems)
    I could use JavaScript mixins that are only added when in a browser. Thus, in the /shared/unit.js file in the /shared folder, I would have this code at the end:
      if (typeof exports !== 'undefined') module.exports = Unit; else mixin(Unit, LocalUnit);
    Then I would have /client/localunit.js store an object LocalUnit of client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would do this.publish('Death') in /shared/unit.js and do this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event listeners list on the server and the client different. Now if I want to clear all subscribed events only from the shared simulator, I can't.
Of course, it is possible to create two event subscription systems - one client-specific and one shared - but now the publish() method would have to do if (!this.game.isServer()) { this.publishOnClient(event); }. All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating the client and shared code. Thus, I wonder if there is an established and simpler method for better code organization, hopefully specific to Node.js games.
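    For what it's worth, the conditional export plus mixin idea above can be sketched roughly as follows; the mixin helper and the use of a publish method are assumptions based on the question's description, not actual code from the project:
      // shared/unit.js - shared, deterministic simulation logic
      function Unit(game) {
          this.game = game;
          this.hp = 100;
      }
      Unit.prototype.addHp = function (amount) {
          this.hp += amount;
          if (this.hp <= 0) {
              // Publish an event instead of calling client-only code directly;
              // publish() is assumed to come from the simulator's pub-sub system
              this.publish('Death');
          }
      };
      // Copies client-only methods onto the prototype (simple illustrative mixin)
      function mixin(target, methods) {
          Object.keys(methods).forEach(function (name) {
              target.prototype[name] = methods[name];
          });
      }
      if (typeof exports !== 'undefined') {
          // Running under node.js on the server: export the shared class as-is
          module.exports = Unit;
      } else {
          // Running in the browser: bolt on the client-only behaviour,
          // where LocalUnit is defined in /client/localunit.js
          mixin(Unit, LocalUnit);
      }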

    Read the article

  • NetworkManager Applet shows no networks

    - by Kkelk
    I am "the friend" referred to in the questions here and here. I decided to come and ask a question myself, as I can still not connect to the wireless network. I downloaded Keryx, as suggested here, and managed to download the necessary package and its dependencies. When I attempted to install the packages on Ubuntu using Keryx, Keryx just closed. Following this, I installed the packages manually using dpkg, and as far as I can tell, this was successful: kieran@ubuntu:~$ cd /host/wifi/Keryx/keryx/projects/Kieran/packages kieran@ubuntu:/host/wifi/Keryx/keryx/projects/Kieran/packages$ sudo dpkg -i *.deb [sudo] password for kieran: Selecting previously deselected package bcmwl-kernel-source. (Reading database ... 118296 files and directories currently installed.) Unpacking bcmwl-kernel-source (from bcmwl-kernel-source_5.60.48.36+bdcom-0ubuntu5_i386.deb) ... Selecting previously deselected package dkms. Unpacking dkms (from dkms_2.1.1.2-3ubuntu1.1_all.deb) ... Selecting previously deselected package fakeroot. Unpacking fakeroot (from fakeroot_1.14.4-1ubuntu1_i386.deb) ... Selecting previously deselected package linux-image. Unpacking linux-image (from linux-image_2.6.35.22.23_i386.deb) ... Selecting previously deselected package menu. Unpacking menu (from menu_2.1.44ubuntu1_i386.deb) ... Selecting previously deselected package patch. Unpacking patch (from patch_2.6-2ubuntu1_i386.deb) ... Setting up fakeroot (1.14.4-1ubuntu1) ... update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode. Setting up linux-image (2.6.35.22.23) ... Setting up menu (2.1.44ubuntu1) ... Setting up patch (2.6-2ubuntu1) ... Processing triggers for man-db ... Setting up dkms (2.1.1.2-3ubuntu1.1) ... Setting up bcmwl-kernel-source (5.60.48.36+bdcom-0ubuntu5) ... Loading new bcmwl-5.60.48.36+bdcom DKMS files... First Installation: checking all kernels... Building only for 2.6.35-22-generic Building for architecture i686 Building initial module for 2.6.35-22-generic Done. wl.ko: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/2.6.35-22-generic/updates/dkms/ depmod..... DKMS: install Completed. update-initramfs: deferring update (trigger activated) Processing triggers for install-info ... Processing triggers for doc-base ... Processing 31 changed 1 added doc-base file(s)... Registering documents with scrollkeeper... Processing triggers for menu ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-2.6.35-22-generic Warning: No support for locale: en_GB.utf8 After rebooting, however, there were still no wireless networks in the NetworkManager Applet list. I opened the file /var/lib/NetworkManager/NetworkManager.state, and both NetworkEnabled and WirelessEnabled were set to True. While i'm very concious I may be asking a stupid question here, both my friend and I have nothing left to suggest, and as such - I would be very grateful for any answers as to how to get wireless working.

    Read the article

  • Cloud Backup: Getting the Users' Backs Up

    - by Tony Davis
    On Wednesday last week, Microsoft announced that as of July 1, all data transfers into its Microsoft Azure cloud will be free (though you have to pay for transferring data out). On Thursday last week, SQL Azure in Western Europe went down. It was a relatively short outage, but since SQL Azure currently provides no easy way to take a standard backup of a database and store it locally, many people had no recourse but to wait patiently for their cloud-based app to resume. It seems that Microsoft are very keen to encourage developers to move their data onto their cloud, but are developers ready to do it, given that such basic backup capabilities are lacking?
    Recently on Simple-Talk, Mike Mooney described a perfect use case for the Microsoft Cloud. They had a simple web-based application with a SQL Server backend; they could move the application to Windows Azure, and the data into SQL Azure, and in the process free themselves from much of the hassle surrounding management and scaling of the hardware, network and so on. It was a great fit, and yet it nearly didn't happen; lack of support for the BACKUP command almost proved a show-stopper. Of course, backups of Azure databases are always and have always been taken automatically, for disaster recovery purposes, but these are strictly on-cloud copies and as of now it is not possible to use them to restore a database to a particular point in time. It seems that none of those clever Microsoft people managed to predict the need to perform basic backups of Azure databases so that copies could be stored locally, outside the Azure universe. At the very least, as Mike points out, performing a local backup before a new deployment is more or less mandatory.
    Microsoft did at least note the sound of gnashing teeth and, as a stop-gap measure, offered SQL Azure Database Copy, which basically allows you to create an online clone of your database, but this doesn't allow for storing local archives of the data. To that end MS has provided SQL Azure Import/Export, to package up and export a database and its data, using BACPACs. These BACPACs do not guarantee transactional consistency; for example, if a child table is modified after the parent is copied, then the copied database will be in an inconsistent state (meaning, to add to the fun, BACPACs need to be created from a database copy). In any event, widespread problems with BACPAC's evil cousin, the DACPAC, have been well-documented, and it seems likely that many will also give the BACPAC the bum's rush.
    Finally, in a TechEd 2011 presentation tagged "SQL Azure Advanced Administration", it was announced that "backup and restore" were coming in the next SQL Azure CTP. And yet this still doesn't mean that we'll get simple backups as DBAs know and love them. What it does mean, at least, is the ability to restore any given database to a point in time within a 2-week window. For the time being, if you want a local copy of your data and don't want to brave the BACPAC, one is left with SSIS or BCP, creative use of schema and data comparison tools, or use of SQL Azure Backup (currently in beta) in order to perform this simple but vital task. Cheers, Tony.

    Read the article

  • Workflow versioning

    - by Nitra
    I believe I have a fundamental misunderstanding when it comes to workflow engines, which I would appreciate your help in sorting out. I'm not sure if my misunderstanding is specific to the workflow engine I'm using or if it's a general one. I happen to use Windows Workflow Foundation (WWF). TLDR version: WWF allows you to implement business processes as long-running workflows (think months or even years). Once started, the workflows can't be changed. But what business process can't change at any time? And if a business process changes, wouldn't you want your software to reflect this change for already started 'instances' of the business process? What am I missing? Background: In WWF you define a workflow by combining a set of activities. There are different types of activities - some of them are for flow control, such as the IfElseActivity and the WhileActivity, while others allow you to perform actual tasks, such as the CodeActivity, which allows you to run .NET code, and the InvokeWebServiceActivity, which allows you to call web services. The activities are combined into a workflow using a visual designer: you pretty much drag-and-drop activities from a toolbox onto a designer area and connect the activities to each other. The workflow and activities have input parameters, output parameters and variables. We have a single workflow which sometimes runs in a matter of a few days, but it may run for 5-6 months. WWF takes care of persisting the workflow state (which activity we are currently executing, what the variable values are, and so on). So far I think WWF makes sense: some people will prefer to implement a software representation of a business process using a visual designer over writing all of it in code. So what's the issue then? What I don't really get is the following: WWF is designed to take care of long-running workflows, but at the same time it has no built-in functionality that allows you to modify running workflows. So if you model a business process as a workflow and run it for 6 months, you had better hope that the business process does not change, because if it does, you'll have to have multiple versions of the workflow executing at the same time. This seems like a fundamental design mistake to me, but at the same time it seems more likely that I've misunderstood something. For us, this has had some real-world effects: We release new versions every month, but some workflows may run for a year. This means that we have several versions of the workflow running in parallel - in other words, several versions of the business logic. This is the same as having many different versions of your code running in production in the same system at the same time, which becomes a bit hard for users to understand (depending on whether they clicked a 'Start' button 9 or 10 months ago, the software will behave differently). Our workflow refers to different types of entities, and since WWF has persisted and serialized these, we can't really refactor the entities, because then existing workflows can't be resumed (deserialization will fail). We've received some suggestions on how to handle this: (1) When we create a new version of the workflow, cancel all running workflows and create new ones. But in our workflows there's a lot of manual work involved, and if we start from scratch a lot of people have to redo their work. (2) Track what has been done in the workflow, and when you create a new one, skip the activities that have already been executed.
    I feel that this alternative may work for simple workflows, but it becomes hairy to automatically figure out which activities to skip if there's major refactoring done to a workflow. (3) When we create a new version of the workflow, upgrade old versions using the new WWF 4.5 functionality for upgrading workflows. But then we would have to skip using the visual designer and write code to inject activities in the right places in the workflow, and according to MSDN this upgrade functionality is only intended for minor bug fixes, not larger changes. What am I missing?
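    Purely as an illustration of suggestion (2) above - this is not Windows Workflow Foundation code, and every name in it is hypothetical - the "track and skip" idea amounts to keeping a log of completed activities outside the workflow and consulting it when a new workflow version replays:

    import java.util.List;
    import java.util.Set;

    // Hypothetical sketch of the "track what has been done and skip it" workaround.
    // Not WF/WWF API - just the shape of the idea: a new workflow version replays its
    // activities and skips those whose identifiers were already completed by the old version.
    class WorkflowReplayer {

        interface Activity {
            String name();   // stable identifier that must survive across workflow versions
            void execute();  // the actual work
        }

        // completedActivityNames would come from tracking data persisted by the old instance
        void replay(List<Activity> newVersionActivities, Set<String> completedActivityNames) {
            for (Activity activity : newVersionActivities) {
                if (completedActivityNames.contains(activity.name())) {
                    continue; // already done in the old version - don't redo the manual work
                }
                activity.execute();
                completedActivityNames.add(activity.name());
            }
        }
    }

    The catch the poster describes is visible right in the skip test: after a major refactoring the activity identifiers no longer line up, so the lookup silently stops matching.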

    Read the article

  • Windows Azure Recipe: Consumer Portal

    - by Clint Edmonson
    Nearly every company on the internet has a web presence. Many are merely using theirs for informational purposes. More sophisticated portals allow customers to register their contact information and provide some level of interaction or customer support. But as our understanding of how consumers use the web increases, the more progressive companies are taking advantage of the social web and rich media delivery to connect at a deeper level with the consumers of their goods and services. Drivers: Cost reduction, Scalability, Global distribution, Time to market. Solution: Here’s a sketch of how a Windows Azure Consumer Portal might be built out. Ingredients: Web Role – this will host the core of the solution. Each web role is a virtual machine hosting an application written in ASP.NET (or optionally PHP or node.js). The number of web roles can be scaled up or down as needed to handle peak and non-peak traffic loads. Database – every modern web application needs to store data. SQL Azure databases look and act exactly like their on-premise siblings but are fault tolerant and have data redundancy built in. Access Control (optional) – if identity needs to be tracked within the solution, the access control service combined with the Windows Identity Foundation framework provides out-of-the-box support for several social media platforms including Windows LiveID, Google, Yahoo!, and Facebook. It also has a provider model to allow integration with other platforms as well. Caching (optional) – for high-traffic sites with lots of read-only data and lists, the distributed in-memory caching service can be used to cache and serve up static data at higher scale and speed than direct database requests. It can also be used to manage user session state. Blob Storage (optional) – for sites that serve up unstructured data such as documents, video, audio, device drivers, and more. The data is highly available and stored redundantly across data centers. Each entry in blob storage is provided with its own unique URL for direct access by the browser. Content Delivery Network (CDN) (optional) – for sites that service users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content. Training Labs: These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.) Windows Azure (16 labs) – Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework.
    With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML. SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data. Windows Azure Services (9 labs) – As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure. See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Programmatically disclosing a node in af:tree and af:treeTable

    - by Frank Nimphius
    A common developer requirement when working with af:tree or af:treeTable components is to programmatically disclose (expand) a specific node in the tree. If the node to disclose is not a top-level node, like a location in a LocationsView -> DepartmentsView -> EmployeesView hierarchy, you also need to disclose the node's parent node hierarchy for application users to see the fully expanded tree node structure. Working on ADF Code Corner sample #101, I wrote the following code lines that show a generic option for disclosing a tree node starting from a handle to the node to disclose. The use case in ADF Code Corner sample #101 is a drag and drop operation from a table component to a tree to relocate employees to a new department. The tree node that receives the drop is a department node contained in a location. In theory the location could be part of a country, and so on, to indicate the depth the tree may have. Based on this structure, the code below provides a generic solution to parse the current node's parent nodes and its child nodes. The drop event provided a rowKey for the tree node that received the drop. As in af:table, the tree row key is not of type oracle.jbo.domain.Key but an implementation of java.util.List that contains the row keys. The JUCtrlHierBinding class in the ADF binding layer that represents the ADF tree binding at runtime provides a method named findNodeByKeyPath that allows you to get a handle to the JUCtrlHierNodeBinding instance that represents a tree node in the binding layer: CollectionModel model = (CollectionModel) your_af_tree_reference.getValue(); JUCtrlHierBinding treeBinding = (JUCtrlHierBinding) model.getWrappedData(); JUCtrlHierNodeBinding treeDropNode = treeBinding.findNodeByKeyPath(dropRowKey); To disclose the tree node, you need to create a RowKeySet, which you do using the RowKeySetImpl class. Because the RowKeySet replaces any existing row key set in the tree, all other nodes are automatically closed: RowKeySetImpl rksImpl = new RowKeySetImpl(); rksImpl.add(dropRowKey); //the first key to add is the node that received the drop operation (departments) Similarly, the root node can be obtained from the tree binding. The root node is the end of all parent node iteration and therefore important: JUCtrlHierNodeBinding rootNode = treeBinding.getRootNodeBinding(); The following code obtains a reference to the hierarchy of parent nodes until the root node is found: JUCtrlHierNodeBinding dropNodeParent = treeDropNode.getParent(); //walk up the tree to expand all parent nodes while(dropNodeParent != null && dropNodeParent != rootNode){ //add the node's keyPath (remember it's a List) to the row key set rksImpl.add(dropNodeParent.getKeyPath()); dropNodeParent = dropNodeParent.getParent(); } Next, you disclose the drop node's immediate child nodes, as otherwise all you see is the department node.
    It's not quite "dinner for one", but the procedure is very similar to the one handling the parent node keys: ArrayList<JUCtrlHierNodeBinding> childList = (ArrayList<JUCtrlHierNodeBinding>) treeDropNode.getChildren(); for(JUCtrlHierNodeBinding nb : childList){ rksImpl.add(nb.getKeyPath()); } Finally, the row key set is defined as the disclosed row keys on the tree, so that when you refresh (PPR) the tree the new disclosed state shows: tree.setDisclosedRowKeys(rksImpl); AdfFacesContext.getCurrentInstance().addPartialTarget(tree.getParent()); The refresh in my use case is on the tree parent component (a layout container), which usually shows the best effect for refreshing the tree component.
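    For convenience, here is the same routine assembled into a single method. This is a minimal consolidated sketch of the snippets above, not code from the original sample: the method name, the tree and dropRowKey parameters, and the import package names are assumptions and may need adjusting to your environment.

    import java.util.ArrayList;
    import java.util.List;
    import oracle.adf.view.rich.component.rich.data.RichTree;
    import oracle.adf.view.rich.context.AdfFacesContext;
    import oracle.jbo.uicli.binding.JUCtrlHierBinding;
    import oracle.jbo.uicli.binding.JUCtrlHierNodeBinding;
    import org.apache.myfaces.trinidad.model.CollectionModel;
    import org.apache.myfaces.trinidad.model.RowKeySetImpl;

    //discloses the node identified by dropRowKey, its parent hierarchy, and its immediate children
    private void discloseDropNode(RichTree tree, List dropRowKey) {
        CollectionModel model = (CollectionModel) tree.getValue();
        JUCtrlHierBinding treeBinding = (JUCtrlHierBinding) model.getWrappedData();
        JUCtrlHierNodeBinding treeDropNode = treeBinding.findNodeByKeyPath(dropRowKey);

        //a fresh RowKeySet replaces the current disclosed state, so all other nodes close
        RowKeySetImpl rksImpl = new RowKeySetImpl();
        rksImpl.add(dropRowKey);

        //walk up to (but not including) the root so the parent hierarchy is disclosed too
        JUCtrlHierNodeBinding rootNode = treeBinding.getRootNodeBinding();
        JUCtrlHierNodeBinding dropNodeParent = treeDropNode.getParent();
        while (dropNodeParent != null && dropNodeParent != rootNode) {
            rksImpl.add(dropNodeParent.getKeyPath());
            dropNodeParent = dropNodeParent.getParent();
        }

        //disclose the immediate children of the drop node as well
        ArrayList<JUCtrlHierNodeBinding> childList = (ArrayList<JUCtrlHierNodeBinding>) treeDropNode.getChildren();
        for (JUCtrlHierNodeBinding nb : childList) {
            rksImpl.add(nb.getKeyPath());
        }

        //set the disclosed keys and PPR the tree's parent container
        tree.setDisclosedRowKeys(rksImpl);
        AdfFacesContext.getCurrentInstance().addPartialTarget(tree.getParent());
    }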

    Read the article

  • Valuing "Working Software over Comprehensive Documentation"

    - by tom.spitz
    I subscribe to the tenets put forth in the Manifesto for Agile Software Development - http://agilemanifesto.org. As Oracle's chief methodologist, that might seem a self-deprecating attitude. After all, the agile manifesto tells us that we should value "individuals and interactions" over "processes and tools." My job includes process development. I also subscribe to ideas put forth in a number of subsequent works including Balancing Agility and Discipline: A Guide for the Perplexed (Boehm/Turner, Addison-Wesley) and Agile Project Management: Creating Innovative Products (Highsmith, Addison-Wesley). Both of these books talk about finding the right balance between "agility and discipline" or between a "predictive and adaptive" project approach. So there still seems to be a place for us in creating the Oracle Unified Method (OUM) to become the "single method framework that supports the successful implementation of every Oracle product." After all, the real idea is to apply just enough ceremony and produce just enough documentation to suit the needs of the particular project that supports an enterprise in moving toward its desired future state. The thing I've been struggling with - and the thing I'd like to hear from you about right now - is the prevalence of an ongoing obsession with "documents." OUM provides a comprehensive set of guidance for an iterative and incremental approach to engineering and implementing software systems. Our intent is first to support the information technology system implementation and, as necessary, support the creation of documentation. OUM, therefore, includes a supporting set of document templates. Our guidance is to employ those templates, sparingly, as needed; not create piles of documentation that you're not gonna (sic) need. In other words, don't serve the method, make the method serve you. Yet, there seems to be a "gimme" mentality in some circles that if you give me a sample document - or better yet - a repository of samples - then I will be able to do anything cheaply and quickly. The notion is certainly appealing AND reuse can save time. Plus, documents are a lowest common denominator way of packaging reusable stuff. However, without sustained investment and management I've seen "reuse repositories" turn quickly into garbage heaps. So, I remain a skeptic. I agree that providing document examples that promote consistency is helpful. However, there may be too much emphasis on the documents themselves and not enough on creating a system that meets the evolving needs of the business. How can we shift the emphasis toward working software and away from our dependency on documents - especially on large, complex implementation projects - while still supporting the need for documentation? I'd like to hear your thoughts.

    Read the article

  • Accounts in Work Items after migration to TFS 2010 and to new domain

    - by Clara Oscura
    Lately I've been doing some tests on migrating our TFS 2008 installation to TFS 2010, coupled with a machine and domain change. One particular topic that was tricky is user accounts. We first installed a new machine with TFS 2010 and then migrated the projects from the old server. The work items were migrated with the projects. Great, but if I try to edit one of the old work items I cannot save it anymore, because some fields contain old user names (e.g. OLDDOMAIN\user) which are not known in the new domain (it should be NEWDOMAIN\user). The errors look like this: When I correct the 'Assigned To' field value, I get another error regarding another field. Before TFS 2010, we had the TFSUsers power tool. It allowed you to map an old user name to a new user name. This is not available anymore, because WI fields with user accounts are now synchronized with AD display name changes (explained here). The correct way to go about this in TFS 2010 is to use TFSConfig Identities before adding the new domain accounts into the TFS groups (documented here). So, too late for us. I've found a (tedious) workaround to change those old accounts in work items in order to allow people to keep working with them. 1. Install the TFS 2010 power tools. 2. Export the WIT from your project (VS | Tools | Process Editor | Work Item Types). Save the definition, for example: Original_MyProject_Task.xml. 3. Copy the xml (NoReadOnly_MyProject_Task.xml) and edit it. From the field definitions of 'Activated By', 'Closed By' and 'Resolved By', remove the following: <WHENNOTCHANGED field="System.State"> <READONLY /> </WHENNOTCHANGED> 4. Import the WIT in VS. Choose the new file (NoReadOnly_MyProject_Task.xml) and import it in MyProject. 5. Open all tasks in Excel (flat list). Display the following columns: Assigned To, Activated By, Closed By, Resolved By. Change the user accounts to the new ones (I usually sort each column alphabetically to make it easier). 6. Publish. If you get a conflict on a field, tough luck. You will have to manually choose "Local version" for each work item. I told you it was a tedious process. 7. Import the original WIT (Original_MyProject_Task.xml) in MyProject. We only changed the WI definition so that we could change some fields; the original definition should be put back. And what about these other fields? Created By and Authorized As. These fields are not editable by definition (VS | Tools | Process Editor | Work Item Fields Explorer), even if they are not marked as read-only in the WIT. You can leave the old values. It doesn't seem to matter to TFS. The other four fields are editable by definition, so only the WIT readonly rule prevents us from changing them. Technorati Tags: TFS, Team Foundation Server 2010, Work Item, Domain change

    Read the article

  • Ask HTG: How Can I Check the Age of My Windows Installation?

    - by Jason Fitzpatrick
    Curious about when you installed Windows and how long you’ve been chugging along without a system refresh? Read on as we show you a simple way to see how long-in-the-tooth your Windows installation is. Dear How-To Geek, It feels like it has been forever since I installed Windows 7 and I’m starting to wonder if some of the performance issues I’m experiencing have something to do with how long ago it was installed. It isn’t crashing or anything horrible, mind you, it just feels slower than it used to and I’m wondering if I should reinstall it to wipe the slate clean. Is there a simple way to determine the original installation date of Windows on its host machine? Sincerely, Worried in Windows Although you only intended to ask one question, you actually asked two. Your direct question is an easy one to answer (how to check the Windows installation date). The indirect question is, however, a little trickier (if you need to reinstall Windows to get a performance boost). Let’s start off with the easy one: how to check your installation date. Windows includes a handy little application just for the purposes of pulling up system information like the installation date, among other things. Open the Start Menu and type cmd in the run box (or, alternatively, press WinKey+R to pull up the run dialog and enter the same command). At the command prompt, type systeminfo.exe Give the application a moment to run; it takes around 15-20 seconds to gather all the data. You’ll most likely need to scroll back up in the console window to find the section at the top that lists operating system stats. What you care about is Original Install Date: We’ve been running the machine we tested the command on since August 23 2009. For the curious, that’s one month and a day after the initial public release of Windows 7 (after we were done playing with early test releases and spent a month mucking around in the guts of Windows 7 to report on features and flaws, we ran a new clean installation and kept on trucking). Now, you might be asking yourself: Why haven’t they reinstalled Windows in all that time? Haven’t things slowed down? Haven’t they upgraded hardware? The truth of the matter is, in most cases there’s no need to completely wipe your computer and start from scratch to resolve issues with Windows and, if you don’t bog your system down with unnecessary and poorly written software, things keep humming along. In fact, we even migrated this machine from a traditional mechanical hard drive to a newer solid-state drive back in 2011. Even though we’ve tested piles of software since then, the machine is still rather clean because 99% of that testing happened in a virtual machine. That’s not just a trick for technology bloggers, either, virtualizing is a handy trick for anyone who wants to run a rock solid base OS and avoid the bog-down-and-then-refresh cycle that can plague a heavily used machine. So while it might be the case that you’ve been running Windows 7 for years and heavy software installation and use has bogged your system down to the point a refresh is in order, we’d strongly suggest reading over the following How-To Geek guides to see if you can’t wrangle the machine into shape without a total wipe (and, if you can’t, at least you’ll be in a better position to keep the refreshed machine light and zippy): HTG Explains: Do You Really Need to Regularly Reinstall Windows? 
PC Cleaning Apps are a Scam: Here’s Why (and How to Speed Up Your PC) The Best Tips for Speeding Up Your Windows PC Beginner Geek: How to Reinstall Windows on Your Computer Everything You Need to Know About Refreshing and Resetting Your Windows 8 PC Armed with a little knowledge, you too can keep a computer humming along until the next iteration of Windows comes along (and beyond) without the hassle of reinstalling Windows and all your apps.         

    Read the article

  • Best Practices - Core allocation

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (also called Logical Domains). Introduction: SPARC T-series servers currently have up to 4 CPU sockets, each of which has up to 8 or (on SPARC T3) 16 CPU cores, while each CPU core has 8 threads, for a maximum of 512 dispatchable CPUs. The defining feature of Oracle VM Server for SPARC is that each domain is assigned CPU threads or cores for its exclusive use. This avoids the overhead of software-based time-slicing and emulation (or binary rewriting) of system state-changing privileged instructions used in traditional hypervisors. To create a domain, administrators specify either the number of CPU threads or cores that the domain will own, as well as its memory and I/O resources. When CPU resources are assigned at the individual thread level, the logical domains constraint manager attempts to assign threads from the same cores to a domain and avoid "split core" situations where the same CPU core is used by multiple domains. Sometimes this is unavoidable, especially when domains are allocated and deallocated CPUs in small increments. Why split cores can matter: Split core allocations can silently reduce performance because multiple domains with different address spaces and memory contents are sharing the core's Level 1 cache (L1$). This is called false cache sharing, since even identical memory addresses from different domains must point to different locations in RAM. The effect of this is increased contention for the cache and higher memory latency for each domain using that core. The degree of performance impact can be widely variable. For applications with very small memory working sets, and with I/O-bound or low-CPU utilization workloads, it may not matter at all: all machines wait for work at the same speed. If the domains have substantial workloads, or are critical to performance, then this can have an important impact. This blog entry was inspired by a customer issue in which one CPU core was split among 3 domains, one of which was the control and service domain. The reported problem was increased I/O latency in guest domains, but the root cause might be higher latency servicing the I/O requests due to the control domain being slowed down. What to do about it: Split core situations are easily avoided. In most cases the logical domains constraint manager will avoid them without any administrative action, but they can be entirely prevented by doing one of several things: Assign virtual CPUs in multiples of 8 - the number of threads per core. For example: ldm set-vcpu 8 mydomain or ldm add-vcpu 24 mydomain. Each domain will then be allocated on a core boundary. Use the whole-core constraint when assigning CPU resources. This allocates CPUs in increments of entire cores instead of virtual CPU threads. The equivalent of the above commands would be ldm set-core 1 mydomain or ldm add-core 3 mydomain. Older syntax does the same thing by adding the -c flag to the add-vcpu, rm-vcpu and set-vcpu commands, but the new syntax is recommended. When whole-core allocation is used, an attempt to add cores to a domain fails if there aren't enough completely empty cores to satisfy the request. See https://blogs.oracle.com/sharakan/entry/oracle_vm_server_for_sparc4 for an excellent article on this topic by Eric Sharakan. Don't obsess: if the workloads have minimal CPU requirements and don't need anywhere near a full CPU core, then don't worry about it.
    If you have low-utilization workloads being consolidated from older machines onto a current T-series, then there's no need to worry about this or to assign an entire core to domains that will never use that much capacity. In any case, make sure the most important domains have their own CPU cores, in particular the control domain and any I/O or service domain, and of course any important guests. Summary: Split-core CPU allocation to domains can potentially have an impact on performance, but the logical domains manager tends to prevent this situation, and it can be completely and simply avoided by allocating virtual CPUs on core boundaries.

    Read the article

  • Package Version Numbers, why are they so important

    - by Chris W Beal
    One of the design goals of IPS has been to allow people to easily move forward to a supported "surface" of components. That is to say, when you # pkg update your system, you get the latest set of components which all work together, based on the packages you already have installed. During development, this has simply meant that you update to the latest "build" of the components. (During development, we build everything and publish everything every two weeks.) Now that we've released Solaris 11 using the IPS technologies, things are a bit more complicated. We need to be able to reflect all the types of Solaris release we are doing - for example Solaris Development builds, Solaris Update builds and "Support Repository Updates" (the replacement for patches) - in the version scheme. So simply saying "151" as the build number isn't sufficient to articulate what you are running, or indeed what is available to update to. In my previous blog post I talked about creating your own package, and gave an example FMRI of pkg://tools/[email protected],0.5.11-0.0.0. But it's probably more instructive to look at the FMRI of a Solaris package. The package "core-os" contains all the common utilities and daemons you need to use Solaris. $ pkg info core-os Name: system/core-os Summary: Core Solaris Description: Operating system core utilities, daemons, and configuration files. Category: System/Core State: Installed Publisher: solaris Version: 0.5.11 Build Release: 5.11 Branch: 0.175.0.0.0.2.1 Packaging Date: Wed Oct 19 07:04:57 2011 Size: 25.14 MB FMRI: pkg://solaris/system/core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z The FMRI is what we will concentrate on here. In this package "solaris" is the publisher. You can use the pkg publisher command to see where the solaris publisher gets its bits from: $ pkg publisher PUBLISHER TYPE STATUS URI solaris origin online http://pkg.oracle.com/solaris/release/ So we can see we get solaris packages from pkg.oracle.com. The package name is system/core-os. Package names can be of arbitrary length, just to allow you to group similar packages together. Now on to the interesting bit - the version. Everything after the @ is part of the version, and IPS will only upgrade to a "higher" version. Taking core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z apart:
    core-os = package name
    0.5.11 = component - in this case we're saying it's a SunOS 5.11 package
    , = separator
    5.11 = built-on version - to indicate what OS version you built the package on
    - = another separator
    0.175.0.0.0.2.1 = branch version
    : = yet another separator
    20111019T070457Z = time stamp when the package was published
    So from that we can see the branch version seems rather complex. It is necessarily so, to allow us to describe the hierarchy of releases we do. In this example we see the following:
    0.175 = known as the trunkid, incremented for each build of a new release of Solaris; during Solaris 11 this should not change
    0 = the Update release for Solaris (0 for FCS, 1 for Update 1, etc.)
    0 = the SRU for Solaris (0 for FCS, 1 for SRU 1, etc.)
    0 = reserved for future use
    2 = build number of the SRU
    1 = nightly ID, only important for Solaris developers
    Take a hypothetical example: core-os@0.5.11,5.11-0.175.1.5.0.4.1:<something>. This would be build 4 of SRU 5 of Update 1 of Solaris 11. This is actually documented in MOS article 1378134.1, which you can read if you have a support contract.
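    To make that anatomy concrete, here is a tiny illustrative sketch - not part of IPS, and deliberately naive about edge cases - that splits the FMRI above into the pieces just described:

    // Illustrative only: naive split of an IPS FMRI into the parts described above.
    public class FmriParts {
        public static void main(String[] args) {
            String fmri = "pkg://solaris/system/core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z";

            String name = fmri.substring(0, fmri.indexOf('@'));        // pkg://solaris/system/core-os
            String version = fmri.substring(fmri.indexOf('@') + 1);    // everything after the @

            String component = version.split(",")[0];                  // 0.5.11
            String afterComma = version.split(",")[1];                 // 5.11-0.175.0.0.0.2.1:20111019T070457Z
            String builtOn = afterComma.split("-")[0];                 // 5.11
            String branchAndTime = afterComma.substring(afterComma.indexOf('-') + 1);
            String branch = branchAndTime.split(":")[0];               // 0.175.0.0.0.2.1
            String timestamp = branchAndTime.split(":")[1];            // 20111019T070457Z

            System.out.println(name + " | " + component + " | " + builtOn + " | " + branch + " | " + timestamp);
        }
    }

    Running it just prints the same breakdown the post walks through by hand; splitting the branch version further on "." gives the trunkid, update, SRU, reserved, build and nightly fields.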

    Read the article

  • Understanding the Customer Form in Release 12 from an AR Perspective!!

    - by user793553
    Confused by the Customer Form in Release 12??  Read on, to get some insight into the evolution of this screen, and how it links in with Trading Community Architecture. Historically, the customer data model was owned by Oracle Receivables (AR).  However, as the data model changed and more complex relationships and attributes had to be tracked and monitored, the Trading Community Architecture (TCA) product was created.  All applications within the E-Business suite that require interaction with a customer integrate with TCA. Customer information is no longer stored in the individual applications but rather in a central repository/registry maintained within TCA.  It is important to understand the following entities/concepts stored in TCA: Party: A party is an entity with whom you can have a potential business relationship.  A party can be either a Person or an Organization.  The Party entity is completely independent of any business relationship; this means that a Party can exist even if you have no transactions with it.   The Party is the "umbrella" entity under which you capture all other attributes listed below. Customer: A customer is a party with whom you have an existing business relationship.  From an AR perspective, you can simplify the concepts by thinking of a Customer as a Party. This definition however does not apply to all other applications. In the Oracle Receivables Customer form, the information displayed at the Customer level is from TCA's Party information record. Customer Account (also called Account): An account contains information about how you transact business with a particular customer.  You can create multiple accounts for a customer.  When you create invoices and receipts you associate it to a particular Account of a Customer. Location: A Location is an address.  It is a point in space, typically identified by a street number, a street name, a city, a state or province, a country.  A location is independent of what it is used for - you do not associate a purpose to a location. Party Site: A Party Site is associated to a Party.  It is the location where a party is physically located.  When defining sites for a Party, only one can be an identifying address.  However, you can define other party sites associated to a party. You can define purposes/usage for Party Sites. Account Site: An Account Site is associated to a Customer Account. It is the location associated to the account you are transacting business with. You can define business purposes (also called site uses) for an Account site. Read more about the Customer Workbench in these notes: Doc ID 1436547.1 Oracle Receivables: Understanding the Customer Form in Release 12 Doc ID  1437866.1 Customer Form - Address: Troubleshooting, Known Issues and Patches Doc ID  1448442.1 Oracle Receivables (AR): Customer Workbench Information Center Do you find this type of blog entry useful?  Please add comments to let us know how we can help you more effectively.  Thank you!

    Read the article

  • Slick2D Rendering Lots of Polygons

    - by Hazzard
    I'm writing a little isometric game using Slick. The world terrain is made up of lots of quadrilaterals. In a small world that is 128 by 128 squares, over 16,000 quadrilaterals need to be rendered. This puts my pretty powerful computer down to 30 fps. I've thought about caching "chunks" of the world so only single chunks would ever need updating at a time, but I don't know how to do this, and I am sure there are other ways to optimize it besides that. Maybe I'm doing the whole thing wrong; surely fancy 3D games that run fine on my machine are more intensive than this. My question is: how can I improve the FPS, and am I doing something wrong? Or does it actually take that much power to render those polygons? Here is the source code for the render method in my game state. It iterates through a 2D array of heights and draws polygons based on the height.
    public void render(GameContainer container, StateBasedGame game, Graphics gfx) throws SlickException {
        gfx.translate(offsetX * d + container.getWidth() / 2, offsetY * d + container.getHeight() / 2);
        gfx.scale(d, d);
        for (int y = 0; y < placeholder.length; y++) {// x & y are isometric diag
            for (int x = 0; x < placeholder[0].length; x++) {
                Polygon poly;
                int hor = TestState.TILE_WIDTH * (x - y);// hor and ver are orthagonal
                int W = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y + 1][x];//points to go off of
                int S = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y + 1][x + 1];
                int E = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y][x + 1];
                int N = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y][x];
                if (placeholder[y][x] == null) {
                    poly = new Polygon();//Create actual surface polygon
                    poly.addPoint(-TestState.TILE_WIDTH + hor, W);
                    poly.addPoint(hor, S + TestState.TILE_HEIGHT);
                    poly.addPoint(TestState.TILE_WIDTH + hor, E);
                    poly.addPoint(hor, N - TestState.TILE_HEIGHT);
                    float z = ((float) heights[y][x + 1] - heights[y + 1][x]) / 32 + 0.5f;
                    placeholder[y][x] = new Tile(poly, new Color(z, z, z));
                    //ShapeRenderer.fill(placeholder[y][x]);
                }
                if (true) {//ONLY draw tile if it's on screen
                    gfx.setColor(placeholder[y][x].getColor());
                    ShapeRenderer.fill(placeholder[y][x]);
                    //gfx.fill(placeholder[y][x]);
                    //placeholder[y][x].
                    //DRAW EDGES
                    if (y + 1 == placeholder.length) {//draw South foundation edges
                        gfx.setColor(Color.gray);
                        Polygon found = new Polygon();
                        found.addPoint(-TestState.TILE_WIDTH + hor, W);
                        found.addPoint(hor, S + TestState.TILE_HEIGHT);
                        found.addPoint(hor, TestState.TILE_HEIGHT * (x + y + 1));
                        found.addPoint(-TestState.TILE_WIDTH + hor, TestState.TILE_HEIGHT * (x + y));
                        gfx.fill(found);
                    }
                    if (x + 1 == placeholder[0].length) {//north
                        gfx.setColor(Color.darkGray);
                        Polygon found = new Polygon();
                        found.addPoint(TestState.TILE_WIDTH + hor, E);
                        found.addPoint(hor, S + TestState.TILE_HEIGHT);
                        found.addPoint(hor, TestState.TILE_HEIGHT * (x + y + 1));
                        found.addPoint(TestState.TILE_WIDTH + hor, TestState.TILE_HEIGHT * (x + y));
                        gfx.fill(found);
                    }//*/
                }
            }
        }
    }
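    The "if (true)" placeholder above is where an on-screen test would go. As one possible direction - a rough, untested sketch that assumes it lives in the same state class as render() and reuses its fields (TestState constants, offsetX, offsetY, d) - culling tiles whose screen-space bounding box falls outside the viewport avoids paying for ShapeRenderer.fill on the thousands of tiles that cannot be seen:

    //Rough sketch of the on-screen test the "if (true)" placeholder stands for.
    //Assumes the same transform as render(): screen = world * d + (offset * d + halfScreen).
    private boolean tileOnScreen(GameContainer container, int hor, int W, int S, int E, int N) {
        float tx = offsetX * d + container.getWidth() / 2f;
        float ty = offsetY * d + container.getHeight() / 2f;

        //bounding box of the quad's four points in world coordinates
        float left   = -TestState.TILE_WIDTH + hor;
        float right  =  TestState.TILE_WIDTH + hor;
        float top    = Math.min(Math.min(W, E), N - TestState.TILE_HEIGHT);
        float bottom = Math.max(Math.max(W, E), S + TestState.TILE_HEIGHT);

        //map to screen space and test against the viewport
        return right * d + tx >= 0 && left * d + tx <= container.getWidth()
            && bottom * d + ty >= 0 && top * d + ty <= container.getHeight();
    }

    The placeholder would then become if (tileOnScreen(container, hor, W, S, E, N)). Beyond that, the chunk idea mentioned in the question - drawing a block of tiles once to an off-screen image and redrawing that image until the terrain changes - is the usual next step, but the sketch above is the cheaper change to try first.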

    Read the article

  • Upgrade issues due to broken "dependency problems prevent configuration of linux-image-generic" error

    - by tsukune1791
    Okay, I've recently upgraded from 11.10 to 12.04 and I've been having some issues. I don't know if it's a bug or not, but I thought I would submit it here. Okay, here's a little background: I ran the distro update from the update manager and got a couple of errors that I didn't catch. The computer restarted, and when I logged in, the Launcher and my top bar of the Ubuntu desktop didn't load. While it was trying to load, a couple of error messages came up, I think they were called "apport", saying they couldn't send the bug information for some reason. I believe it said something's wrong with my internet connection, but nothing's wrong with it. Anyway, I tried running some things in the terminal, namely sudo apt-get -f install, sudo apt-get upgrade and sudo apt-get dist-upgrade, and I keep getting the following errors: dustin@marceau-laptop:~$ sudo apt-get dist-upgrade [sudo] password for dustin: Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? Y Setting up initramfs-tools (0.99ubuntu13) ... update-initramfs: deferring update (trigger activated) Setting up linux-image-3.2.0-24-generic (3.2.0-24.37) ... Running depmod. update-initramfs: deferring update (hook will be called later) Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/dkms 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic run-parts: executing /etc/kernel/postinst.d/pm-utils 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic run-parts: executing /etc/kernel/postinst.d/update-notifier 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic run-parts: executing /etc/kernel/postinst.d/zz-runlilo 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic Fatal: No images have been defined. run-parts: /etc/kernel/postinst.d/zz-runlilo exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-24-generic.postinst line 1010. dpkg: error processing linux-image-3.2.0-24-generic (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.2.0-24-generic; however: Package linux-image-3.2.0-24-generic is not configured yet. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.24.26); however: Package linux-image-generic is not configured yet. dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured Processing triggers for initramfs-tools ... No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because the error message indicates its a followup error from a previous failure. update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic Fatal: No images have been defined.
    run-parts: /etc/initramfs/post-update.d//runlilo exited with return code 1 dpkg: error processing initramfs-tools (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: linux-image-3.2.0-24-generic linux-image-generic linux-generic initramfs-tools localepurge: Disk space freed in /usr/share/locale: 0 KiB localepurge: Disk space freed in /usr/share/man: 0 KiB localepurge: Disk space freed in /usr/share/gnome/help: 0 KiB localepurge: Disk space freed in /usr/share/omf: 0 KiB localepurge: Disk space freed in /usr/share/doc/kde/HTML: 0 KiB Total disk space freed by localepurge: 0 KiB E: Sub-process /usr/bin/dpkg returned an error code (1) And my Ubuntu desktop is still not working. I can log into Gnome and Ubuntu 2D, but the Launcher, I think it's called, doesn't load. Can someone help me fix these errors, or point me in the right direction to get them fixed? It is much appreciated.

    Read the article

< Previous Page | 346 347 348 349 350 351 352 353 354 355 356 357  | Next Page >