Search Results

Search found 23472 results on 939 pages for 'show'.


  • How to migrate an ASP.NET MVC 3 / MVC 4 project to ASP.NET MVC 5?

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/10/16/how-to-migrate-asp.net-mvc-3--mvc4-project-to.aspx

    Soon you will see a new version of MVC, MVC 5, in VS2013. MVC 5 is incorporated in VS2013, but MVC 3 is not supported there (I confirmed this on Channel 9). So people who have installed only VS2013, or who don't have an older version of Visual Studio, will run into trouble with projects that are still on MVC 3. The error happens because the MVC 4 and MVC 5 installations don't contain the DLLs that version 3 of ASP.NET MVC uses.

    Don't panic. If you want to upgrade your project, here is a trick to solve the issue.

    When you open the project you will see that some DLLs under References have a yellow icon. This means those DLLs are missing or were not found on your system. Note their names, remove them from References, and add them back through Add Reference. I am telling you to remove them first so Visual Studio will not prevent you from adding the newer version of the same assembly. The assemblies involved are System.Web.Mvc and the Razor and WebPages DLLs. Remember that MVC 3 uses old versions of these assemblies.

    When you are done adding the assemblies, open web.config. There are two web.config files in an MVC project: one in the root folder and one in the Views folder. You need to update the assembly version numbers in both. This is not a big deal if you know the assembly names: if your web.config shows an assembly version of 3.0.0.0, replace it with 4.0.0.0 or 5.0.0.0 according to the MVC version you are targeting. Apply the same change for all of the affected DLLs in both web.config files.

    Note: in the Visual Studio template, views go in the ~/Views folder, but if you keep views in other folders (for example areas/views or themes/views) and those folders have their own web.config, remember to update them as well. Otherwise your project will compile with no warnings or errors, but it certainly will not work.

    After these changes you can compile your project and it will work as it should. Thanks for reading my post. Follow me on FB and Twitter to stay updated.

    Read the article

  • ASP.Net MVC - how to post values to the server that are not in an input element

    - by David Carter
    Problem

    As was mentioned in a previous blog, I am building a web page that allows the user to select dates in a calendar and then shows the dates in an unordered list. The problem now is that those dates need to be sent to the server on page submit so that they can be saved to the database. If I was storing the dates in an input element, say a textbox, that wouldn't be an issue, but because they are in an html element whose contents are not posted to the server, an alternative strategy needs to be developed.

    Solution

    The approach that I took to solve this problem is as follows:

    1. Place a hidden input field on the form:

        <input id="hiddenDates" name="hiddenDates" type="hidden" value="" />

    ASP.NET MVC has an Html helper with a method called Hidden() that will do this for you: @Html.Hidden("hiddenDates").

    2. Copy the values from the html element to the hidden input field before submitting the form. The following javascript is added to the page:

        $(function () {
            $('#formCreate').submit(function () {
                PopulateHiddenDates();
            });
        });

        function PopulateHiddenDates() {
            var dateValues = '';
            $('#dateList').children('li').each(function (index) {
                dateValues += $(this).attr("id") + ",";
            });
            $('#hiddenDates').val(dateValues);
        }

    I'm using jQuery to bind to the form submit event so that my method to populate the hidden field gets called before the form is submitted. The dateList element is an unordered list, and by using the jQuery each function I can iterate through all the <li> items that it contains, get each item's id attribute (to which I have assigned the value of the date in milliseconds) and write them to the hidden field as a comma-delimited string.

    3. Process the dates on the server:

        [HttpPost]
        public ActionResult Create(string hiddenDates, int utcOffset)
        {
            List<DateTime> dates = GetDates(hiddenDates, utcOffset);
            // ... save the dates to the database, then redirect as appropriate
            return RedirectToAction("Index");
        }

        private List<DateTime> GetDates(string hiddenDates, int utcOffset)
        {
            List<DateTime> dates = new List<DateTime>();
            var values = hiddenDates.Split(",".ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
            foreach (var item in values)
            {
                DateTime newDate = new DateTime(1970, 1, 1).AddMilliseconds(double.Parse(item)).AddMinutes(utcOffset * -1);
                dates.Add(newDate);
            }
            return dates;
        }

    By declaring a parameter with the same name as the hidden field, ASP.NET will take care of finding the corresponding entry in the form collection posted back to the server and binding it to the hiddenDates parameter. Excellent! I now have the dates the user selected and I can save them to the database. I have also used the same technique to pass back a utcOffset (declared as an int so that model binding converts it for me) so that I know what timezone the user is in and I can show the dates correctly to users in other timezones if necessary. This isn't strictly necessary at the moment, but I plan to introduce times later.

    Saving multiple dates from an unordered list - DONE!

    Read the article

  • Computer Visionaries 2014 Kinect Hackathon

    - by T
    Originally posted on: http://geekswithblogs.net/tburger/archive/2014/08/08/computer-visionaries-2014-kinect-hackathon.aspx

    A big thank you to Computer Vision Dallas and Microsoft for putting together the Computer Visionaries 2014 Kinect Hackathon that took place July 18th and 19th, 2014. Our team had a great time and learned a lot from the Kinect MVPs and the Microsoft team. The Dallas Entrepreneur Center was a fantastic venue. In total, 114 people showed up to form 15 teams.

    Burger ITS & Friends team members with Ben Lower: Shawn Weisfeld, Teresa Burger, Robert Burger, Harold Pulcher, Taylor Woolley, Cori Drew (not pictured), and Katlyn Drew (not pictured).

    We arrived Friday after a long day of work/driving. Originally, our idea was to make a learning game for kids around 5 years old, intended to have multiple simultaneous players dragging and dropping tiles onto a canvas area. We quickly learned that we were limited to two simultaneous players. After working on the game for the rest of the evening and into the next morning, we decided that a fast multi-player game with hand gestures was not going to happen without going beyond what was provided with the API. If we were going to have something to show, it was time to switch gears.

    The next idea on the table was the Photo Anywhere Kiosk. The user can use voice and hand gestures to pick a place they would like to be. After the user says a place (or anything they want) and then the word "search", the app uses Bing to display a bunch of images for them to choose from. With hand gestures (grab and slide to move back and forth, push/pull to select an image) the user can get the perfect image to pose with. I couldn't get a snippet showing the hand, but when the app is in use a hand shows up to cue the user to use their hand to control its movement. Once they choose an image, we use the Kinect background removal feature to superimpose the user on that image. When they are in the perfect position, they say "save" to save the image. Currently the image is saved in the images folder of the user's account, but there are many possibilities, such as emailing it, posting it to social media, etc.

    The competition was great and we were honored to be recognized for third place. Other related posts: http://jasongfox.com/computer-visionaries-2014-incredible-success/

    A couple of us are continuing to work on the kids' game and are going to make it a Windows 8 multi-player game without Kinect functionality. Stay tuned for more updates.

    Read the article

  • Friday Fun: Games that Look Like Productivity Apps

    - by Mysticgeek
    We’ve been showing you fun flash games to play during company time on a Friday afternoon. Hopefully while playing them, you haven’t received a “talking to”. Today we show you some cool games to play that look like productivity apps, so the boss will be none the wiser.

    The website cantyouseeimbusy.com has developed some very neat little games that look like productivity apps such as Word and Excel. These apps look exactly like some project you would be working on, but are really neat little games. Here we take a look at three cool ones on the site called Breakdown, Leadership, and Cost Cutter.

    Leadership
    Leadership is a cool game that looks like something you would be working on in Excel and is a spin-off of the classic game Moon Lander. You navigate your ship through a variety of challenging line graphs.

    Breakdown
    This one is a knock-off of the classic game Break Out. Use your mouse to scroll the racket at the bottom and bounce the ball off of the text in the document. Press the space bar to pause the game and the elements will disappear…good for when the boss comes around.

    Cost Cutter
    This one is a puzzle game where it looks like you’re working on some bar charts in Excel. You need to click combinations of two or more blocks that are the same color. Again, hit the spacebar and the game elements will disappear.

    If you’re looking for a way to goof off with some simple games without the boss knowing, these will definitely do the trick. Another cool game along these lines is Excit!, which we covered previously. Play Cost Cutter, Breakdown, and Leadership at cantyouseeimbusy.com.

    Similar Articles (Productive Geek Tips): Friday Fun: Get Your Mario On, Friday Fun: Bricks Breaking & Cube Crash, Friday Fun: Fancy Pants Adventures, Friday Fun: GemCraft is a Totally Addictive Tower Defense Game, Friday Fun: Five More Time Wasting Online Games

    Read the article

  • middle-click on Thunderbird icon in Unity Launcher gives window without titlebar or menubar

    - by Mike Kupfer
    I expect that this is a (low-priority) bug, but apport-bug strongly encouraged me to come here first, so here I am...

    What I did: I started Thunderbird and then minimized the window. I then middle-clicked on the Thunderbird icon in the Unity (3D) Launcher. I do not have any of the appmenu packages installed (not indicator-appmenu, nor any of the *-globalmenu or appmenu-* packages).

    What I expected: I would get the Thunderbird main window back at its original location, or possibly I'd get a Compose Mail window somewhere on the desktop. (This was something of an experiment, so I wasn't really sure what to expect.)

    What happens: The Thunderbird main window appears in the upper left corner of the display, displacing the Launcher. This was not its previous location. The window has no titlebar or menubar. The top panel says "Thunderbird Mail", but moving the mouse over that text does nothing (doesn't show the close/minimize/maximize controls). I can still bring up the Launcher and start applications. If I start Firefox and give it input focus, clicking on the Thunderbird window leaves the focus with Firefox. I can use the Switcher to give Thunderbird the input focus. (Both the Unity Switcher and the Static Application Switcher work. If I use the Static Application Switcher, I see Thunderbird's menubar in the top panel until I release Alt-tab.) I can kill Thunderbird from the Launcher. I can also use the Unity Switcher to minimize everything. If I then left-click on the Thunderbird icon in the Launcher, the Thunderbird main window reappears in the upper left. But this time it does not displace the Launcher, and it has the proper titlebar and menubar.

    This does not happen with Unity 2D. And I haven't seen it with any other app. I realize that because I've disabled the appmenu stuff, I'm not getting the full Unity experience, and there might be some rough edges. But this is a bug, yes?

    Read the article

  • Choose the Text Editor Used to View Source Code in Internet Explorer

    - by Asian Angel
    Everyone has a favorite text editor that they like to use when viewing or working with source code. If you are unhappy with the default choice in Internet Explorer 8, then join us as we show you how to set up access to your favorite text editor.

    A Look at Before
    Here is Internet Explorer on our test system, ready to help us view the source code for one of the pages here at the site. Perhaps “Notepad” is your default source code viewer… or, as in the case of our test system, “EditPad Lite” is the default due to choices made while installing it.

    Choose Your Favorite Text Editor
    Chances are you have your own personal favorite and want to make it the default source code viewer. To get started, go to the “Tools Menu” and click on “Developer Tools”, or press “F12”, to access the “Developer Tools Window”. Once you have the “Developer Tools Window” open, go to the “File Menu”, then “Customize Internet Explorer View Source”, and click on “Other”. Once you have clicked on “Other” you will see the “Program Directory” for the current default app. Here you can see the “Program Files Folder” for “EditPad Lite”. To change the default app, simply browse for the appropriate program folder. On our test system we decided to change the default to “Editra”. Once you have located the program that you want to use, click on the “.exe” file for that app and click “Open”. Once you have clicked “Open”, all that is left for you to do is close the “Developer Tools Window”…everything else is already taken care of. And just like that you can be viewing source code with your favorite text editor.

    Conclusion
    If you have been unhappy with the default source code viewer in Internet Explorer 8, then you can set up access to your favorite text editor in just a couple of minutes. Nice, quick, and easy, the way it ought to be. Thanks to HTG & TinyHacker reader Dwight for the tip!

    Similar Articles (Productive Geek Tips): View Webpage Source Code in Your Favorite Text Editor – Firefox, View Webpage Source Code in Tabs in Firefox, Easily View Source of Included Files in Firefox, Remove ISP Text or Corporate Branding from Internet Explorer Title Bar, Remove PartyPoker (Or Other Items) from the Internet Explorer Tools Menu

    Read the article

  • Set Custom Reload Times for Individual Webpages in Chrome

    - by Asian Angel
    Do you have a webpage that needs to be reloaded every so often, or perhaps you have multiple webpages that each need their own individual reload time? Now you can have the best of both with the AutoReloader extension for Google Chrome.

    Using AutoReloader
    When you first look at the drop-down window everything will be in a neutral “waiting” state. You can start using the extension immediately by simply entering the desired “time frame” for reloading a webpage. Notice for the “Repeat Option” that “0 = Continuous”… You may want to have a quick look through the “Options” to see if there are any “operational changes” that you would like to make. Once you enter a time, click on the “Set” link to start the timer. Notice that you can view the time remaining on the “Toolbar Button” unless you disabled that feature in the “Options”. Clicking on the “Toolbar Button” will show a larger version of the timer in the drop-down window along with a “Cancel Current Timer” link.

    Here is the best part of all with AutoReloader…you can set up your own customized list of “Reload Times” and then access them through the drop-down window. Using the two times shown here we were able to set the “Productive Geek Webpage” up for 30 second reloads and the “TinyHacker Webpage” up for 1 minute reloads at the same time. There was no conflict whatsoever in running both “reload times” simultaneously. This is a really terrific feature!

    Conclusion
    Whether you have only one webpage or multiple pages that need periodic reloading (such as tracking a Woot-Off or an eBay auction), the AutoReloader extension is the perfect tool for the job. Running custom reload times simultaneously has never been easier.

    Links
    Download the AutoReloader extension (Google Chrome Extensions)

    Similar Articles (Productive Geek Tips): Set Up Automatic Timed Page Reloading on Your Webpages in Firefox, Remove Custom about:config Entries the Easy Way, Enable Vista Black Style Theme for Google Chrome in XP, Activate the Redesigned New-Tab Interface in Google Chrome, Modify Tab Ordering in Google Chrome

    Read the article

  • View Clipboard & Copy To Clipboard from NetBeans IDE

    - by Geertjan
    Thanks to this code, I can press Ctrl-Alt-V in NetBeans IDE and then view whatever is in the clipboard:

        import java.awt.Toolkit;
        import java.awt.datatransfer.DataFlavor;
        import java.awt.datatransfer.Transferable;
        import java.awt.datatransfer.UnsupportedFlavorException;
        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.io.IOException;
        import javax.swing.JOptionPane;
        import org.openide.awt.ActionRegistration;
        import org.openide.awt.ActionReference;
        import org.openide.awt.ActionReferences;
        import org.openide.awt.ActionID;
        import org.openide.util.NbBundle.Messages;

        @ActionID(
                category = "Tools",
                id = "org.demo.ShowClipboardAction")
        @ActionRegistration(
                displayName = "#CTL_ShowClipboardAction")
        @ActionReferences({
            @ActionReference(path = "Menu/Tools", position = 5),
            @ActionReference(path = "Shortcuts", name = "DA-V")
        })
        @Messages("CTL_ShowClipboardAction=Show Clipboard")
        public final class ShowClipboardAction implements ActionListener {

            @Override
            public void actionPerformed(ActionEvent e) {
                JOptionPane.showMessageDialog(null, getClipboard(), "Clipboard Content", 1);
            }

            public String getClipboard() {
                String text = null;
                Transferable t = Toolkit.getDefaultToolkit().getSystemClipboard().getContents(null);
                try {
                    if (t != null && t.isDataFlavorSupported(DataFlavor.stringFlavor)) {
                        text = (String) t.getTransferData(DataFlavor.stringFlavor);
                    }
                } catch (UnsupportedFlavorException e) {
                } catch (IOException e) {
                }
                return text;
            }
        }

    And now I can also press Ctrl-Alt-C, which copies the path to the current file to the clipboard:

        import java.awt.Toolkit;
        import java.awt.datatransfer.Clipboard;
        import java.awt.datatransfer.StringSelection;
        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import org.openide.awt.ActionID;
        import org.openide.awt.ActionReference;
        import org.openide.awt.ActionReferences;
        import org.openide.awt.ActionRegistration;
        import org.openide.awt.StatusDisplayer;
        import org.openide.loaders.DataObject;
        import org.openide.util.NbBundle.Messages;

        @ActionID(
                category = "Tools",
                id = "org.demo.CopyPathToClipboard")
        @ActionRegistration(
                displayName = "#CTL_CopyPathToClipboard")
        @ActionReferences({
            @ActionReference(path = "Menu/Tools", position = 0),
            @ActionReference(path = "Editors/Popup", position = 10),
            @ActionReference(path = "Shortcuts", name = "DA-C")
        })
        @Messages("CTL_CopyPathToClipboard=Copy Path to Clipboard")
        public final class CopyPathToClipboardAction implements ActionListener {

            private final DataObject context;

            public CopyPathToClipboardAction(DataObject context) {
                this.context = context;
            }

            @Override
            public void actionPerformed(ActionEvent e) {
                String path = context.getPrimaryFile().getPath();
                StatusDisplayer.getDefault().setStatusText(path);
                StringSelection ss = new StringSelection(path);
                Clipboard clipboard = Toolkit.getDefaultToolkit().getSystemClipboard();
                clipboard.setContents(ss, null);
            }
        }

    Read the article

  • Steps to deploying on Windows Azure

    - by Vincent Grondin
    Alright, these steps might be a little detailed and a few might not be necessary, but it's still a pretty accurate road map to deploying on Azure...

    1) Open your solution.
    2) Rebuild all.
    3) Right click on your Azure project and click "Publish".
    4) It should open a Windows Explorer window with your package to be uploaded (.cspkg) and its associated configuration (.cscfg) to be uploaded too. Keep it open, you'll need that path later on...
    5) It should also open a browser asking you to login to your passport account; please do so.
    6) After this you will be redirected to the Azure Portal, where you will see your Azure project name below the « Project Name » section. Click on it.
    7) Then you should be redirected to a detailed view of your account on Azure, where you will create a new service by clicking the hyperlink in the top right corner.
    8) Choose the right service type for you, most likely the "Hosted Service" type.
    9) Choose a « Label » name and click « Next ».
    10) Choose a name for your service and validate that the name is available in the cloud by clicking the "Check Availability" button.
    11) At the bottom of this same page, you can choose to create a group for your service, use no group, or join an existing group. Creating a group means that all applications that belong to the same group see no cost for exchanging data between applications of the same group. Most of the time, when you create a single application, creating a group is not necessary. You should choose a region that's close to your own region.
    12) On the next window, you should see a "Production" environment and a "Staging" environment. Beware, because "Staging" and "Production" are two different environments in the cloud, and applications in "Staging", even when not running, do continue to rack up charges... Choose an environment and click "Deploy".
    13) In the following window, browse to the path where your .cspkg resides and then do the same thing with your .cscfg file. Choose a name for your label and click "Deploy"...
    14) From now on, the clock is ticking, and unless you have free Azure hours, your credit card is being billed…
    15) Click on the « Run » button to start your application.
    16) Be patient.... be very patient…
    17) Once your application has finished starting, you should see a GREEN circle on the left side of the screen indicating that your application is READY. Click the URL to test your application, and remember that if your application is a service, you have to hit the "svc" class behind the link you see there. Something in the likes of http://testvince2.cloudapp.net/service1.svc (this is a fictional link).
    18) Hopefully your application will show up, or in the case of a service, you will see your service's WSDL, meaning that everything is working fine.

    Happy cloud computing all!

    Read the article

  • BAM Data Control in multiple ADF Faces Components

    - by [email protected]
    As we know, Oracle BAM data control instance sharing is not supported. When two or more ADF Faces components must display the same data, and are bound to the same Oracle BAM data control definition, we have to make sure that we wrap each ADF Faces component in an ADF task flow, and set the Data Control Scope to isolated. This blog shows a small sample to demonstrate this. In this sample we will create a Pie and a Bar graph using the same BAM data control, such that both components use the same data control but have isolated scope. This sample can be downloaded from Sample1.zip.

    Set-up: Create a BAM data control using the Employees DO (sample).

    Steps:
    1. Right click on the ViewController project and select "New -> ADF Task Flow".
    2. Check "Create Bounded Task Flow", give a meaningful name (ex: EmpPieTF.xml) to the task flow (TF) and click "OK".
    3. From the "Components Palette", drag and drop "View" into the task flow diagram. Give a meaningful name to the view.
    4. Double click it, and click "OK" for "Create New JSF Page Fragment".
    5. From "Data Controls", drag and drop "Employees -> Query" into this jsff page as "Graph -> Pie" (Pie: Sales_Number and Slices: Salesperson).
    6. Repeat steps 1 through 4 for another task flow (ex: EmpBarTF). From "Data Controls", drag and drop "Employees -> Query" into that jsff page as "Graph -> Bar" (Bars: Sales_Number and X-axis: Salesperson).
    7. Open the task flow created in step 2. In the Structure pane, right click on "Task Flow Definition - EmpPieTF".
    8. Click "Insert inside Task Flow Definition - EmpPieTF -> ADF Task Flow -> Data Control Scope" and click "OK".
    9. For the "Data Control Scope", in the Property Inspector -> General section, change the data control scope from Shared to Isolated.
    10. Repeat steps 7 through 9 for the second task flow.
    11. Now create a new jspx page, for example Main.jspx. Drag and drop both task flows ("EmpPieTF" and "EmpBarTF") as regions. Surround them with panel components as needed.
    12. Run the page Main.jspx.

    Now when the page runs, although both components are created using the same data control, the bindings are not shared and each component has a separate instance of the data control.

    Read the article

  • Responding to Invites

    - by Daniel Moth
    Following up from my post about Sending Outlook Invites, here is a shorter one on how to respond.

    Whatever your choice (ACCEPT, TENTATIVE, DECLINE), if the sender has not unchecked the "Request Response" option, then send your response. Always send your response. Even if you think the sender made a mistake in keeping it on, send your response. Seriously, not responding is plain rude.

    If you knew about the meeting, and you are happy investing your time in it, and the time and location work for you, and there is an implicit/explicit agenda, then ACCEPT and send it. If one or more of those things don't work for you, then you have a few options. Send a DECLINE explaining why. Reply with email to ask for further details or for a change to be made; if you don't receive a response to your email, send a DECLINE when you've waited enough. Send a TENTATIVE if you haven't made up your mind yet. Hint: if they really require you there, they'll respond asking "why tentative" and you can have a discussion about it. When you deem it appropriate, instead of the options above, you can also use the counter-propose feature of Outlook, but IMO that feature has a questionable interaction model and UI (for both sender and recipient), so many people get confused by it.

    BTW, two of my Outlook rules are relevant to invites. The first one auto-marks as read the ACCEPT responses if there is no comment in the body of the accept (I check later who has accepted and who hasn't via the "Tracking" button of the invite). I don't have a rule for the DECLINE and TENTATIVE responses because typically I follow up with folks that send those. The second rule ensures that all invites go to a specific folder. That is the first folder I see when I triage email. It is also the only folder which I have configured to show a count of all items inside it, rather than the unread count - when sending a response to an invite the item disappears from the folder, and hence it is empty and not nagging me.

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Best Platform/Engine for turn-based Client/Server Android game

    - by Paradine
    I'm currently designing a turn-based game for tablets, initially for Android, with porting to iOS considered later in the design. I'm having trouble narrowing down the available technologies to even know where to spend my research time. I am hoping that if I explain what I am trying to achieve, someone may be able to suggest a platform and/or engine.

    I've looked into some of the open source engines ( http://www.cuteandroid.com/ten-open-source-android-2d-or-3d-game-engine-for-android-developers ) and some appear to handle much of what I might require - although with a higher focus on graphics than I need. Mages looks interesting, although development appears to have ceased. If I could somehow leverage Google Apps that would be excellent.

    Here is what I am trying to achieve:
    - PvP turn-based strategy game over the internet - minimal animation and bandwidth required
    - Players match up online using a MetaGame system
    - MatchID created on the Resolution Server and the game starts
    - Clients have a 30 second countdown to select a MoveString
    - Clients send a small, secure, timestamped and MatchID-tagged MoveString to the Resolution Server
    - Resolution Server looks up the MoveString for each player, resolves, and updates each player's status in the MatchID on the server
    - Resolution Server updates the client views
    - Repeat until victory conditions are met - MatchID closed, rewards earned in the MetaGame

    There will also need to be a full social and account system and a metagame backend - but this could run on separate systems. The tablet in offline mode would be catalog browsing and perhaps single-player AI - but I'm focusing on the Resolution Server at this point. I'm not even certain if I would be looking at an Android app or a web app at this stage! I want a custom GUI, so I guess an app - but maybe, as I have little animation, a web app might also work. Probably some combination of both.

    There will be very small overhead in data between client and server - essentially a small text string every 30 seconds sent to the Resolution Server, which looks up the effect, applies it to the opponent's string, and determines some results to apply to the match. The client view is updated minimally with the results (only 5 in-game integers tracked) - perhaps triggering small animations/popups on the client to show the end result, e.g. an explosion.

    If you have suggestions for a good technology or platform for building the Resolution Server, I'd love to hear them. Also, if you have experience with open source engines and could narrow down which (if any) might be most suitable, that would be a big help. Thanks in advance.
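    For illustration only - the question doesn't settle on a server stack, so this is just a minimal Java sketch, and every name in it (MatchState, PlayerState, resolveTurn, the sample MoveStrings and their effects) is invented for the example - the per-turn resolution step described above might look roughly like this:

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical per-match state: the five tracked integers and the pending MoveString per player.
        class PlayerState {
            int[] stats = new int[5];
            String pendingMove;            // MoveString submitted during the 30-second countdown
        }

        class MatchState {
            String matchId;
            PlayerState p1 = new PlayerState();
            PlayerState p2 = new PlayerState();
            boolean closed;
        }

        class ResolutionServer {
            // Lookup table: MoveString -> stat changes applied to the opponent (values are made up).
            private static final Map<String, int[]> MOVE_EFFECTS = new HashMap<>();
            static {
                MOVE_EFFECTS.put("strike", new int[] { -3, 0, 0, 0, 0 });
                MOVE_EFFECTS.put("guard",  new int[] {  0, 2, 0, 0, 0 });
            }

            // Called once per turn, after both timestamped MoveStrings for this MatchID have arrived.
            void resolveTurn(MatchState match) {
                apply(match.p1.pendingMove, match.p2);     // p1's move affects p2
                apply(match.p2.pendingMove, match.p1);     // p2's move affects p1

                // Victory check: here the match closes when either player's first stat reaches zero.
                if (match.p1.stats[0] <= 0 || match.p2.stats[0] <= 0) {
                    match.closed = true;                   // MatchID closed; metagame rewards handled elsewhere
                }
            }

            private void apply(String moveString, PlayerState opponent) {
                int[] effect = moveString == null ? null : MOVE_EFFECTS.get(moveString);
                if (effect != null) {
                    for (int i = 0; i < effect.length; i++) {
                        opponent.stats[i] += effect[i];
                    }
                }
            }
        }

    After resolveTurn runs, the server only has to push the handful of updated integers back to each client, which fits the minimal-bandwidth goal above.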

    Read the article

  • European e-government Action Plan all about interoperability

    - by trond-arne.undheim
    Yesterday, the European Commission released its European eGovernment Action Plan for 2011-2015. The plan includes measures on providing deeper user empowerment, enhancing the Internal Market, more efficiency and effectiveness of public administrations, and putting in place pre-conditions for developing e-government.

    The Good
    - Defines interoperability very clearly. Calls interoperability "a pre-condition for cross-border eGovernment services" (a very strong formulation) and says interoperability "is supported by open specifications".
    - Uses the terminology "open specifications" which, let's face it, is pretty close to "open standards", which is the term the rest of the world would use.
    - Confirms that Member States are fully committed to the political priorities of the Malmö Declaration (which was all about open standards), including the very strong action: by 2013, all Member States will have incorporated the political priorities of the Malmö Declaration in their national strategies.

    Such tight Action Plan integration between Commission and Member State priorities has seldom been attempted before, particularly not in a field where European legal competence is virtually non-existent. What we see now is the subtle force of soft power rather than the rough force of regulation. In this case, it is the Member States who want Europe to take the lead. Very refreshing!

    Some quotes that show the commitment to interoperability and open specifications:

    "The emergence of innovative technologies such as "service-oriented architectures" (SOA), or "clouds" of services, together with more open specifications which allow for greater sharing, re-use and interoperability reinforce the ability of ICT to play a key role in this quest for efficiency in the public sector." (p.4)

    "Interoperability is supported through open specifications" (p.13)

    Section 2.4.1, Open Specifications and Interoperability (p.13), is dedicated to this important topic (open specifications and interoperability are nearly 100% interrelated):

    "Interoperability is the ability of systems and machines to exchange, process and correctly interpret information. It is more than just a technical challenge, as it also involves legal, organisational and semantic aspects of handling data" (p.13)

    "standards and open platforms offer opportunities for more cost-effective use of resources and delivery of services" (p.13)

    The Bad
    Shies away from defining open standards, or even open specifications, the EU's preferred term for the key enabler of interoperability.

    Verdict
    90/100, a very respectable score.

    Read the article

  • Can you Trust Search?

    - by David Dorf
    An awful lot of referrals to e-commerce sites come from web searches. Retailers rely on search engine optimization (SEO) to correctly position their website so they can be found. Search on "blue jeans" and the results are determined by a semi-secret algorithm -- in my case Levi.com, Banana Republic, and ShopStyle show up.

    The NY Times recently uncovered a situation where JCPenney, via third parties hired to help with SEO, was caught manipulating search results so they were erroneously higher in page rankings. No doubt this helped drive additional sales this past Christmas. The article, The Dirty Little Secrets of Search, is well worth reading.

    My friend Ron Kleinman started an interesting discussion at the ARTS LinkedIn forum. He posed the question: the ability of a single company to "punish" any retailer (by significantly impacting their on-line sales volume) who does not play by their rules ... is this a good thing or a bad thing?

    Clearly JCP was in the wrong and needed to be punished, but should that decision lie with Google alone? Don't get me wrong -- I'm certainly not advocating we create a Department of Search where bureaucrats think of ways to spend money, but Google wields an awful lot of power in this situation, and it makes me feel uncomfortable.

    Now Google is incorporating more social aspects into their search results. For example, when Google knows it's me (i.e. I'm logged in when using Google), search results will be influenced by my Twitter network. In an effort to increase relevance, the blogs and re-tweeted articles from my network will be higher in the search results than they otherwise would be. So in the case of product searches, things discussed in my network will rise to the top. Continuing my blue jean example, if someone in my network had been discussing Macy's, perhaps they would now be higher in the result set.

    soapbox: I already have lots of spammers posting bogus comments to this blog in an effort to create additional links to their sites and thus increase their search ranking. Should I expect a similar situation on Twitter and eventually Facebook? Now retailers need to expand their SEO efforts to incorporate social media as well, but do us all a favor and please don't cheat.

    Read the article

  • Should we exclude code from the code coverage analysis?

    - by romaintaz
    I'm working on several applications, mainly legacy ones. Currently, their code coverage is quite low: generally between 10 and 50%. For several weeks, we have had recurring discussions with the Bangalore teams (the main part of the development is done offshore in India) regarding the exclusion of packages or classes from Cobertura (our code coverage tool, even if we are currently migrating to JaCoCo).

    Their point of view is the following: as they will not write any unit tests on some layers of the application (1), these layers should simply be excluded from the code coverage measure. In other words, they want to limit the code coverage measure to the code that is tested or should be tested. Also, when they work on unit tests for a complex class, the benefit - purely in terms of code coverage - will go unnoticed in a large application. Reducing the scope of the code coverage will make this kind of effort more visible... The interest of this approach is that we will have a code coverage measure that indicates the current status of the part of the application we consider as testable.

    However, my point of view is that we are somehow faking the figures. This solution is an easy way to reach a higher level of code coverage without any effort. Another point that bothers me is the following: if we show a coverage increase from one week to another, how can we tell if this good news is due to the good work of the developers, or simply due to new exclusions? In addition, we will not be able to know exactly what is considered in the code coverage measure. For example, if I have a 10,000 lines of code application with 40% code coverage, I can deduce that 40% of my code base is tested (2). But what happens if we set exclusions? If the code coverage is now 60%, what can I deduce exactly? That 60% of my "important" code base is tested?

    As far as I am concerned, I prefer to keep the "real" code coverage value, even if we can't be cheerful about it. In addition, thanks to Sonar, we can easily navigate in our code base and know, for any module / package / class, its own code coverage. But of course, the global code coverage will still be low.

    What is your opinion on this subject? How do you do it on your projects? Thanks.

    (1) These layers are generally related to the UI / Java beans, etc.
    (2) I know that's not true. In fact, it only means that 40% of my code base is executed by the tests.
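    To make the "faking the figures" concern concrete, here is a tiny worked example (the 10,000-line application and the 40% figure come from the question; the 3,000 excluded lines are an invented number):

        public class CoverageExample {
            public static void main(String[] args) {
                int totalLines    = 10_000;  // whole code base, as in the question
                int coveredLines  = 4_000;   // 40% coverage, as in the question
                int excludedLines = 3_000;   // hypothetical UI/bean layers excluded from the measure

                double fullCoverage    = 100.0 * coveredLines / totalLines;
                double reducedCoverage = 100.0 * coveredLines / (totalLines - excludedLines);

                // Same tests, same covered lines - only the denominator changed.
                System.out.printf("Coverage over everything:  %.0f%%%n", fullCoverage);    // 40%
                System.out.printf("Coverage after exclusions: %.0f%%%n", reducedCoverage); // ~57%
            }
        }

    So a jump from 40% to 57% can come entirely from configuration rather than from new tests, which is exactly the ambiguity described above.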

    Read the article

  • Algorithm for spreading labels in a visually appealing and intuitive way

    - by mac
    Short version

    Is there a design pattern for distributing vehicle labels in a non-overlapping fashion, placing them as close as possible to the vehicle they refer to? If not, is either of the methods I suggest viable? How would you implement this yourself?

    Extended version

    In the game I'm writing I have a bird's-eye view of my airborne vehicles. I also have, next to each of the vehicles, a small label with key data about the vehicle. This is an actual screenshot:

    Now, since the vehicles could be flying at different altitudes, their icons could overlap. However, I would like to never have their labels overlapping (or a label from vehicle 'A' overlap the icon of vehicle 'B'). Currently, I can detect collisions between sprites and I simply push away the offending label in a direction opposite to the otherwise-overlapped sprite. This works in most situations, but when the airspace gets crowded, the label can get pushed very far away from its vehicle, even if there was a "smarter" alternative. For example I get:

        B - label
        A -----------label
        C - label

    where it would be better (= label closer to the vehicle) to get:

        B - label
        label - A
        C - label

    EDIT: It also has to be considered that besides the overlapping-vehicles case, there might be other configurations in which vehicles' labels could overlap (the ASCII-art examples show, for example, three very close vehicles in which the label of A would overlap the icons of B and C).

    I have two ideas on how to improve the present situation, but before spending time implementing them, I thought I would turn to the community for advice (after all it seems like a "common enough problem" that a design pattern for it could exist). For what it's worth, here are the two ideas I was thinking of:

    Slot-isation of label space
    In this scenario I would divide all of the screen into "slots" for the labels. Then, each vehicle would always have its label placed in the closest empty one (empty = no other sprites at that location).

    Spiralling search
    From the location of the vehicle on the screen, I would try to place the label at increasing angles and then at increasing radiuses, until a non-overlapping location is found. Something along the lines of:

        try 0°, 10px
        try 10°, 10px
        try 20°, 10px
        ...
        try 350°, 10px
        try 0°, 20px
        try 10°, 20px
        ...
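    As a rough sketch of the "spiralling search" idea only - this is not taken from the game's code; the Rect type and the overlapsSomething test are placeholders for whatever collision check the engine already provides - it could look like this in Java:

        import java.util.function.Predicate;

        class LabelPlacer {
            // Axis-aligned rectangle placeholder for a label's screen footprint.
            static class Rect {
                double x, y, width, height;
                Rect(double x, double y, double width, double height) {
                    this.x = x; this.y = y; this.width = width; this.height = height;
                }
            }

            // Try increasing radii around the vehicle and, at each radius, sweep the angles,
            // returning the first label rectangle that does not overlap anything.
            // Because the radius loop is the outer loop, the first hit is also the closest
            // candidate at this angular resolution. Returns null if the airspace is too crowded.
            static Rect findLabelPosition(double vehicleX, double vehicleY,
                                          double labelWidth, double labelHeight,
                                          Predicate<Rect> overlapsSomething) {
                final double radiusStep = 10, maxRadius = 200, angleStepDegrees = 10;
                for (double radius = radiusStep; radius <= maxRadius; radius += radiusStep) {
                    for (double angle = 0; angle < 360; angle += angleStepDegrees) {
                        double rad = Math.toRadians(angle);
                        double x = vehicleX + radius * Math.cos(rad);
                        double y = vehicleY + radius * Math.sin(rad);
                        Rect candidate = new Rect(x, y, labelWidth, labelHeight);
                        if (!overlapsSomething.test(candidate)) {
                            return candidate;
                        }
                    }
                }
                return null;
            }
        }

    The caller supplies its own overlap test against vehicle icons and already-placed labels; the slot approach from the first idea could reuse the same test over a fixed grid of candidate rectangles instead of the angle/radius sweep.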

    Read the article

  • Speed Up the Help Dialog in Windows and Office

    - by Matthew Guay
    When you click Help, you don’t want to wait for your computer to bring it to you. Here’s how you can speed up the Help dialog in Windows and Office.

    If you have a slow internet connection, chances are you’ve been frustrated by the Help dialogs in Windows and Office trying to download fresh content every time you open them. This can be great if the updated help files contain better content, but sometimes you just want to find what you were looking for without waiting. Here’s how you can turn off the automatic online help.

    Use Local Help in Windows
    Windows 7 and Vista’s Help dialog usually tries to load the latest content from the net, but this can take a long time on slow connections. If you’re seeing the above screen a lot, you may want to switch to offline help. Click the “Online Help” button at the bottom, and select “Get offline Help”. Now your computer will just load the pre-installed help files. And don’t worry; if there’s a major update to your help files, Windows will download and install it through Windows Update.

    Stupid Geek Tip: An easy way to open Windows Help is to click on your desktop or Start Menu and press F1 on your keyboard.

    Use Local Help in Office
    This same trick works in Office 2007 and 2010. We’ve actually had more problems with Office’s help being tardy. Solve this the same way as with Windows help. Click on the “Connected to Office.com” or “Connected to Office Online” button, depending on your version of Office, and select “Show content only from this computer”. This will automatically change the settings for Help in all of your Office applications.

    While this may not be a major trick, it can be helpful, especially if you have a slow internet connection and want to get things done quickly.

    Similar Articles (Productive Geek Tips): How to See the About Dialog and Version Information in Office 2007, Speed Up SATA Hard Drives in Windows Vista, Make Mouse Navigation Faster in Windows, Speed up Your Windows Vista Computer with ReadyBoost, Set the Speed Dial as the Opera Startup Page

    Read the article

  • Fun with Sun Ray, 3D, Oracle VM x86 and SRIOV

    - by wim.coekaerts
    One of the things I like about my job is that I get to play around with stuff and make use of the technologies we work on in my teams. Sort of my own little playground. It allows me to study the products in great detail and put them to use in ways that individual product teams don't always intend them to be used for :) but that makes it fun. I have a lot of this set up at home because... work is sort of a hobby and I just like to tinker with it.

    Anyway, a few weeks ago I was looking at my Sun Ray rig at home and how well 3D works. Google Earth and some basic OpenGL tests like glxspheres combined with VirtualGL. It resulted in some very cool demos recorded with my little camera (sorry for the crappy quality of the video :-) : OVDC (soft client) on my Mac, and a Sun Ray 2FS. Never mind the hiccups during zoom; that's because I was using the scroll wheel on my mouse and I can't scroll uninterrupted :)

    Anyway, this is quite cool! The setup for this was the following: Sun Ray on the LAN, the latest Sun Ray Server 5 installed on OL5.5 inside a VM running on Oracle VM 2.2 (hardware virt, with a virtual network (vif)), and the VirtualGL rendering happening on another box (wopr5) that runs Linux on a little Atom D520 with an ION2 GPU. So traffic goes from the Sun Ray to the Sun Ray Server to wopr5 and back. Given that this is full-screen 3D, it puts a good amount of load on the network, and it's pretty cool that SRS was just a VM :)

    So, separately, I had written a little blog entry about using SR-IOV and Oracle VM a while back (link to sriov blog entry). Last night when I came home I wanted to do some more playing around with SR-IOV and live migrate. To do this, I wanted to set up a VM with two network interfaces: one virtual network (vif) and one that's one of the SR-IOV virtual functions from my network card. Inside the guest they show up as eth0 and eth1, and I then bond them using a standard Linux bonding device (bond0 here) with active-active links. The goal here is that on live migrate, we detach the VF (eth1 in the guest in this case), the bond then just hums along on eth0 (vif), we live migrate the VM, and then on the other server, after the migrate completes, we re-attach a VF to the VM there, eth1 pops up again and the bond uses both eth0/eth1 to do its work.

    So, to set this up, I figured, why not use my Sun Ray server VM, because the 3D work generates a nice network load and is very latency/timing sensitive. In the end, I ran glxspheres on my Sun Ray server (VM) displaying on my Sun Ray 2FS, and while that was running, I did my live migrate test of this VM (unplug PCI VF, migrate, reconnect VF) and guess what, it just kept running :) veryyyyyy cool. Now, it was supposed to, but it's always nice to see it actually work, for real. Here's a diagram of it.

    No gimmicks - just real technology at work! Enjoy :)

    Read the article

  • Installing Canon LBP6000 in Ubuntu 12.04

    - by MMA
    This is really frustrating. I am trying to install the LBP6000 in Ubuntu 12.04 without any success. (Well, I had success about a week back when I first bought the printer and finally printed pages after a struggle of several hours. Then today it suddenly stopped working, and I uninstalled everything and started from scratch. Now I seem to have lost the way.)

    My steps
    1. Downloaded the latest Canon driver from the Canon site. File: Linux_CAPT_PrinterDriver_V240_uk_EN.tar.gz
    2. Got the radu script (I am allowed only two hyperlinks, so I cannot put the link here; you can Google "radu Canon").
    3. Changed the /etc/modprobe.d/blacklist-cups-usblp.conf file as instructed in the official documentation (see the section "Ubuntu 12.04 Install"). Now this file looks like:

        # cups talks to the raw USB devices, so we need to blacklist usblp to avoid
        # grabbing them
        # blacklist usblp

    4. Rebooted my machine.
    5. Changed the port in the radu script to 59787 as instructed in the link at step 3 (again, see the section "Ubuntu 12.04 Install", or see the comment at "How to Install Canon LBP Printers in Ubuntu"). Also put the latest deb files from step 1 in the appropriate directory of this script.
    6. Ran the radu script. A single printer, LBP6000, got added - not the two printers (one of them to be disabled) that the script's terminal output mentioned.
    7. sudo /etc/init.d/ccpd status shows:

        Canon Printer Daemon for CUPS: ccpd: 3142 3139

    Results

    The printer does not print. The printer state (from System Settings -> Printing, or at the CUPS http interface localhost:631/printers/LBP6000) goes from Idle to Processing, a job appears in the print queue, then the job disappears and the printer state goes back to Idle. The actual printer does not even blink.

    Diagnostics (got help from the link in step 3, Troubleshooting)

    captstatusui -P LBP6000 shows a communication error.

    lsmod | grep usblp did not show anything. After running sudo modprobe usblp, it shows:

        usblp 17885 0

    However, ls -l /dev/usb/lp0 gives:

        ls: cannot access /dev/usb/lp0: No such file or directory

    /var/ccpd did not exist, so I created it:

        sudo mkdir /var/ccpd
        sudo mkfifo /var/ccpd/fifo0
        sudo chown -R lp:lp /var/ccpd

    Any suggestion will be appreciated. I do not know what to do.

    Read the article

  • Best development architecture for a small team of programmers (WAMP stack)

    - by Tio
    Hi all. I'm in my first month of work at a new company, and after I met the two programmers and asked how things are organized in terms of projects inside the company, they simply shrugged their shoulders and said that nothing is organized. I think my jaw hit the ground right then. (I know some of you think I should quit, but I'm in a privileged position: I'm the most experienced there, so there's room for me to grow inside the company, and I'm taking the high road.)

    So I talked to the IT guy and one of the programmers, and maybe this week I'm going to get a server all to myself to start organizing things. I've used various architectures in my previous work experiences. In one, I was developing on a server on the network (no source control, of course). In another, I was developing on my local computer, with no server on the network, just source control. And at home I have a mix of the two: everything I code is on a server on the network, I have those folders under source control, and I also have a no-ip account configured on that server so I can access it from anywhere and show the clients anything.

    For me, this last solution (the one I have at home) is the best:
    - Network server with WAMP stack.
    - The server has a public IP so we can access it by domain name.
    - Use subdomains for each project.
    - Everybody works directly on the network server.

    I think the problem arises when two or more people want to work on the same project; in this case the only way to do it is by using source control and local repositories. This is great, but I think it makes development a lot more complicated. In the example I gave, to make a change to the code, I would simply need to open the file in my favorite editor, make the change, alter the database, check the changes into source control and presto, all done. Using local repositories, I would have to get the latest version, run the scripts on the local database to update it, alter the file, alter the database, check in the changes to the network server, update the database on the network server, see if everything is running well on the network server, and presto, all done. To me this seems overcomplicated for a change to a simple PHP page. I could share the database between local development and the network server; that sure would help.

    Maybe the best way to do this is just simply:
    - Network server with WAMP stack (a test server, so to speak), a public server accessible through the web.
    - LAMP stack on every developer's computer (minus the database).
    - We develop locally, test, then check the changes into the test server, and presto.

    What do you think? Maybe I should start doing this at home too. Thanks and best regards...

    Edit: I'm sorry, I made a mistake and switched WAMP with LAMP; sorry about that.

    Read the article

  • System Variables, Stored Procedures or Functions for Meta Data

    - by BuckWoody
    Whenever you want to know something about SQL Server’s configuration, whether that’s the Instance itself or a database, you have a few options.

    If you want to know “dynamic” data, such as how much memory or CPU is consumed or what a particular query is doing, you should be using the Dynamic Management Views (DMVs) that you can read about here: http://msdn.microsoft.com/en-us/library/ms188754.aspx

    But if you’re looking for how much memory is installed on the server, the version of the Instance, the drive letters of the backups and so on, you have other choices. The first of these are system variables. You access these with a SELECT statement, and they are useful when you need a discrete value for use, say in another query or to put into a table. You can read more about those here: http://msdn.microsoft.com/en-us/library/ms173823.aspx

    You also have a few stored procedures you can use. These often bring back a lot more data, pre-formatted for the screen. You access these with the EXECUTE syntax. It is a bit more difficult to take the data they return and get a single value or place the results in another table, but it is possible. You can read more about those here: http://msdn.microsoft.com/en-us/library/ms187961.aspx

    Yet another option is to use a system function, which you access with a SELECT statement and which also brings back a discrete value that you can use in a test or place in another table. You can read about those here: http://msdn.microsoft.com/en-us/library/ms187812.aspx

    By the way, many of these constructs simply query tables in the master or msdb databases for the Instance, or the system tables in a user database. You can get much of the information there as well, and there are even system views in each database to show you the meta-data dealing with structure – more on that here: http://msdn.microsoft.com/en-us/library/ms186778.aspx

    Some of these choices are the only way to get at a certain piece of data. But others overlap – you can use one or the other, and both come back with the same data. So, like many Microsoft products, you have multiple ways to do the same thing. And that’s OK – just research what each is used for and how it’s intended to be used, and you’ll be able to select (pun intended) the right choice.
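    As a small illustration (not from the original post), the discrete values that system variables and system functions return can be picked up from any client. The sketch below uses JDBC with Microsoft's SQL Server driver; the connection string and credentials are placeholders you would replace with your own:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class MetadataSample {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details; point these at your own instance.
                String url = "jdbc:sqlserver://localhost:1433;databaseName=master;encrypt=false";

                try (Connection conn = DriverManager.getConnection(url, "sa", "yourPassword");
                     Statement stmt = conn.createStatement();
                     // One system variable and one system function, both read with a plain SELECT,
                     // so the single values they return are easy to reuse in code or other queries.
                     ResultSet rs = stmt.executeQuery(
                             "SELECT @@VERSION AS VersionText, SERVERPROPERTY('ProductVersion') AS ProductVersion")) {
                    if (rs.next()) {
                        System.out.println(rs.getString("VersionText"));
                        System.out.println(rs.getString("ProductVersion"));
                    }
                }
            }
        }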

    Read the article

  • Detecting if someone is in a room in your house and sending you an email.

    - by mbcrump
    Let me set up this scenario:
    - You are selling your house.
    - You have small children (possibly 2 rug rats or more).
    - The real estate company calls and says they have a showing for your house between the hours of 3pm-6pm.
    - You have to keep the children occupied.
    - You realize this is the 5th time you have shown your house this week.

    What is a programmer to do?……Set up a webcam, find motion detection software that can launch a program, and of course, Visual Studio 2010.

    First come the tools:
    - Some sort of webcam. I chose the WinBook because a friend of mine loaned it to me. It is a basic USB 2.0 camera that supports 640x480 without software.
    - Webcam software that supports launching a program. webcamXP supports this.
    - A VS 2010 console application.
    - A cell phone that you can check your email on.

    You may be asking why write code to send the email when a lot of commercial motion detection packages include that as base functionality. Well, first, it costs money, and second, I don't want a picture of the person, as that probably invades privacy, and as a future buyer I don't want someone recording me in their house.

    Now onto the show...

    First, the code part. We are going to create a VS 2010 (or whatever version you have installed) console application and use the following code snippet:

        using System;
        using System.Net.Mail;
        using System.Net;

        namespace MotionDetectionEmailer
        {
            class Program
            {
                static void Main(string[] args)
                {
                    try
                    {
                        MailMessage m = new MailMessage
                           ("[email protected]",
                            "[email protected]",
                            "Motion Detected at " + DateTime.Now,
                            "Someone is in the downstairs basement.");
                        SmtpClient client = new SmtpClient("smtp.charter.net");
                        client.Credentials = new NetworkCredential("mbcrump", "NOTTELLINGYOU");
                        client.Send(m);
                    }
                    catch (SmtpException ex)
                    {
                        Console.WriteLine("Who cares?? " + ex.ToString());
                    }
                }
            }
        }

    Second, download and install webcamXP, select the option to launch an external program, and you are finished. Now, when you are at McDonald's and can check your email on your phone, you will see when they entered the house and you can go back home without waiting the full 3 hours. --- NICE!

    Read the article

  • Oracle ADF Essentials & ADF training material now on the iPad By Grant Ronald

    - by JuergenKress
    Faster and Simpler Java-based Application Development - Now Free. Oracle ADF Essentials is an end-to-end Java EE framework that simplifies application development by providing out-of-the-box infrastructure services and a visual and declarative development experience. Oracle ADF Essentials is free to develop and deploy.
    Oracle ADF Essentials Overview Demo
    Tutorial - Using Oracle ADF Essentials with JPA/EJB and JSF
    Oracle ADF Essentials FAQ
    Introduction to Oracle ADF Seminar
    Tutorial - Developing with Oracle ADF Essentials
    ADF training material now on the iPad, by Grant Ronald: My team has developed about a week's worth of ADF training material under the titles ADF Insider and ADF Insider Essentials. This is available from our page on OTN, but we are now loading all our content on YouTube as well, so it can be accessed on iPads. Over the next couple of weeks we'll also add these YouTube links to the OTN page, but in the meantime, if you have an interest in ADF I strongly urge you to subscribe to our ADFInsiderEssentials YouTube Channel so you can be alerted when new content comes online. Please also provide your comments and thumbs up/down, and let us know which content and topics interest you.
    GlassFish Extension for Oracle JDeveloper, by Shay Shmeltzer: We just released a new version of Oracle JDeveloper - 11.1.2.3. One new feature is built-in support for GlassFish. This includes the ability to create an "application server" connection to GlassFish and then deploy to that server with one click from inside JDeveloper. You can use this for deploying Oracle ADF Essentials applications on GlassFish, but you can also use it to deploy any Java EE application you build in JDeveloper on GlassFish. However, if you are planning to work with GlassFish and JDeveloper on a more regular basis as your development server, you might find my new extension useful. It allows you to start and stop an external GlassFish instance, as well as start it in debug mode (which allows JDeveloper to remotely debug your application as it runs on the server). I also added a button that invokes the GlassFish web admin console. Here is a quick demo that will show you how to work with the extension.
    WebLogic Partner Community: For regular information become a member of the WebLogic Partner Community. Please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Technorati Tags: adf training,adf,grant Ronald,adf essential,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Overview of getting and setting the URL and parts of the URL using angularjs and/or Javascript

    - by Sandy Good
    Getting and setting the URL, and different parts of the URL, are a basic part of application design, for:
    Page navigation
    Deep linking
    Providing a link to the user
    Querying data
    Passing information to other pages
    Both AngularJS and JavaScript provide ways to get/set the URL and parts of the URL. I'm looking for the following information:
    Situation: Show a simple URL in the browser address bar to the user, but provide a more detailed URL with string parameters to the page that the user will not see. In other words, two different URLs will be used: a simple one that the user sees in the browser, and a more detailed one available to the page on load.
    Get URL info with PHP when the page initially loads, but don't reload the PHP page when the user needs more detailed info that is already loaded but not displayed yet.
    Set the URL with a more detailed URL for deep linking as the user drills down to more specific information.
    Get URL info in a controller or JavaScript when AngularJS detects a change in the URL with routing.
    Hash or query string or both? Should I use a hash # in the URL, a ?= query string, or both? Here is what I currently know and what I want:
    A query string (http://www.name.com?mykey=itemID) will prevent AngularJS from reloading the page. So I can change the URL by adding/changing the string at the end, thereby providing new info to the page, and keep the page from reloading.
    I can change the URL and force a page reload with: window.location.href = "#Store/" + argUserPubId + "?itemID=home";
    If home is the itemID string, I want the code to simply load the page and not display more detailed information. If there is a real itemID in the URL query string, I want the code to display the more detailed information.
    Code from AngularJS will run either from the controller specified in the routing, or from a controller specified in the HTML, or both. The AngularJS code specified in the routing seems to run first, before the code specified in the HTML.
    The routing templateUrl: can point the page at a different URL than the one sent to the browser address bar:
    when('/Store/:StoreId', {
        templateUrl: function (params) { return 'Client_Pages/Stores.php?storeID=' + params.StoreId; },
        controller: 'storeParseData'
    }).
    The above code detects http://www.name.com/Store/StoreID in the browser, but SENDS http://www.name.com/Client_Pages/Stores.php?storeID=StoreID to the page. In the above code, a function is used for the AngularJS routing templateUrl: to set the template URL dynamically.
    So, when the user clicks something to see the details of an item, how should I configure the URL? Should I use the AngularJS $location service or window.location.href? Should I use a longer URL with more parameters, a hash bang, or a query string? Should I use:
    http://www.name.com/Store/StoreID/ItemID or
    http://www.name.com/Store/StoreID#ItemID or
    http://www.name.com/Store/StoreID?ItemID or
    http://www.name.com/Store#StoreID?ItemID or
    something else?
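    One possible way to wire this together is sketched below. It is illustrative only, not from the original question, apart from the route, the storeParseData controller name and the itemID parameter, and it assumes the ngRoute module is loaded. The route keeps the simple /Store/StoreId path the user sees, while the controller reads the optional detail from $location.search(), so drilling down only changes the query string and does not reload the page:

    var app = angular.module('storeApp', ['ngRoute']);

    app.config(function ($routeProvider) {
        $routeProvider.when('/Store/:StoreId', {
            // The browser shows .../#/Store/123, but the template request
            // carries the store id to the PHP page as a query string.
            templateUrl: function (params) {
                return 'Client_Pages/Stores.php?storeID=' + params.StoreId;
            },
            controller: 'storeParseData'
        });
    });

    app.controller('storeParseData', function ($scope, $routeParams, $location) {
        $scope.storeId = $routeParams.StoreId;

        // Read the optional detail from the query string, e.g. #/Store/123?itemID=456.
        // 'home' means: just show the page, no detail view.
        $scope.itemId = $location.search().itemID || 'home';

        // Drill down without a full page reload: only the query string changes.
        $scope.showItem = function (itemId) {
            $location.search('itemID', itemId);
        };
    });

    With an arrangement like this, $location handles the in-page drill-down and the user-visible URL stays at /Store/StoreID?itemID=..., while window.location.href is only needed when a full page reload is actually wanted.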

    Read the article

  • How to repair an external hard drive?

    - by dodohjk
    I would like to reformat my hard disk and, if possible, recover the (somewhat unimportant) contents. I have a Western Digital 1TB hard drive which had an NTFS partition. I unplugged the drive without safely removing it first. At first a pop-up was asking me to use a Windows OS to run the chkdsk /f command; however, in the effort to keep using a Linux OS I used the ntfsfix command in the Ubuntu terminal. Now, when I try to access the hard drive, it doesn't show up anymore in Nautilus. I tried reformatting it using Disk Utility, but it gives me an error message, and GParted would hang on the "Scanning devices" step indefinitely. Please comment with any output that you would like to see and I will add it to my question.
    EDIT: Disk Utility tells me it is on /dev/sdb. The command sudo fdisk -l gives:

    dodohjk@DodosPC:~$ sudo fdisk -l
    [sudo] password for dodohjk:

    Disk /dev/sda: 250.1 GB, 250059350016 bytes
    255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0006fa8c

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *         4094   482344959   241170433    5  Extended
    /dev/sda2        482344960   488396799     3025920   82  Linux swap / Solaris
    /dev/sda5             4096    31461127    15728516   83  Linux
    /dev/sda6         31463424    52434943    10485760   83  Linux
    /dev/sda7         52436992    62923320     5243164+  83  Linux
    /dev/sda8         62924800   482344959   209710080   83  Linux

    Disk /dev/sdb: 1000.2 GB, 1000202043392 bytes
    255 heads, 63 sectors/track, 121600 cylinders, total 1953519616 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x6e697373

    This doesn't look like a partition table
    Probably you selected the wrong device.

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   ?    1936269394  3772285809   918008208   4f  QNX4.x 3rd part
    /dev/sdb2   ?    1917848077  2462285169   272218546+  73  Unknown
    /dev/sdb3   ?    1818575915  2362751050   272087568   2b  Unknown
    /dev/sdb4   ?    2844524554  2844579527       27487   61  SpeedStor

    Partition table entries are not in disk order

    I wrote something wrong here; however, here is the output of fsck /dev/sdb:

    dodohjk@DodosPC:~$ sudo fsck /dev/sdb
    fsck from util-linux 2.20.1
    e2fsck 1.42.5 (29-Jul-2012)
    ext2fs_open2: Bad magic number in super-block
    fsck.ext2: Superblock invalid, trying backup blocks...
    fsck.ext2: Bad magic number in super-block while trying to open /dev/sdb

    The superblock could not be read or does not describe a correct ext2
    filesystem. If the device is valid and it really contains an ext2
    filesystem (and not swap or ufs or something else), then the superblock
    is corrupt, and you might try running e2fsck with an alternate superblock:
        e2fsck -b 8193 <device>

    Read the article

< Previous Page | 660 661 662 663 664 665 666 667 668 669 670 671  | Next Page >