Search Results

Search found 10472 results on 419 pages for 'david hope ross'.


  • LibreOffice: Open in current program by default?

    - by David Oneill
    I often need to open pipe-delimited .txt files in LibreOffice Calc. However, once I have Calc running, if I do File > Open and select a spreadsheet with the extension .txt, it opens in Writer instead. Is there a way to tell LibreOffice to open the file in whatever program I'm opening it from, instead of picking one itself? Barring that, is there a way to tell it to always use Calc for .txt files (when I open them from the Open dialog in Calc)? I still want them to open in GEdit, as they currently do, when I double-click them in Thunar.

    Read the article

  • Creating a Document Library with Content Type in code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/15/154360.aspx

    In the past, I have shown how to create a list content type and add the content type to a list in code. As a developer, many of the artifacts which we create are widgets which have a list or document library as the back end. We need to be able to create our applications (web part, etc.) without having the user involved except to enter the list item data. Today, I will show you how to do the same with a document library. A summary of what we will do is as follows:

    1. Create an empty SharePoint project in Visual Studio
    2. Add a code folder in the solution and drag and drop the Utilities and Extensions libraries into the solution
    3. Create a new Feature and add an event receiver; all the code will be in the event receiver
    4. Add the fields which will extend the built-in Document content type
    5. If the content type does not exist, create it
    6. If the document library does not exist, create it with the new content type inherited from the Document content type
    7. Delete the Document content type from the library (as we have a new one which inherited from it)
    8. Make the fields added to the new content type visible in the library's default view

    Here we go:

    Create an empty SharePoint project in Visual Studio.

    Add a code folder in the solution and drag and drop the Utilities and Extensions libraries into the solution. The Utilities and Extensions library will be part of this project, for which I will provide a download link at the end of this post. Drag and drop them into your project. If dragged and dropped from Windows Explorer, you will need to show all files and then include them in your project. Change the namespace to agree with your project.

    Create a new Feature and add an event receiver; all the code will be in the event receiver. Here we added a new Feature called "CreateDocLib" and then right-clicked to add an event receiver. All of our code will be in this event receiver. For this demo I will only be using the Feature Activated event.

    From this point on we will be looking at code! We are adding two constants for use: columnGroup (how we want SharePoint to group the columns, usually the company name) and ctName (the content type name).

        using System;
        using System.Runtime.InteropServices;
        using System.Security.Permissions;
        using Microsoft.SharePoint;

        namespace CreateDocLib.Features.CreateDocLib
        {
            /// <summary>
            /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade.
            /// </summary>
            /// <remarks>
            /// The GUID attached to this class may be used during packaging and should not be modified.
            /// </remarks>
            [Guid("56e6897c-97c4-41ac-bc5b-5cd2c04f2dd1")]
            public class CreateDocLibEventReceiver : SPFeatureReceiver
            {
                const string columnGroup = "DJ";
                const string ctName = "DJDocLib";
            }
        }

    Here we are creating the Feature Activated event: adding the new fields (site columns), testing if the content type exists and adding it if not, and testing if the document library exists and creating it if not.

        #region DocLib
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            using (SPWeb spWeb = properties.GetWeb() as SPWeb)
            {
                // add the fields
                addFields(spWeb);

                // add the content type; we will not create it if it already exists
                SPContentType testCT = spWeb.ContentTypes[ctName];
                if (testCT == null)
                {
                    // the content type does not exist, add it
                    addContentType(spWeb, ctName);
                }

                if (spWeb.Lists.TryGetList("MyDocuments") == null)
                {
                    // create the list if it doesn't exist
                    CreateDocLib(spWeb);
                }
            }
        }
        #endregion

    The addFields method uses the Utilities library to add site columns to the site. We can add as many fields within this method as we like. Here we are adding one for demonstration purposes: Icon, as a URL type.

        public void addFields(SPWeb spWeb)
        {
            Utilities.addField(spWeb, "Icon", SPFieldType.URL, false, columnGroup);
        }

    The addContentType method adds the new content type to the site content types. We have already checked that it does not exist. In addition, here is where we add the linkages from our previously created site columns to our new content type.

        private static void addContentType(SPWeb spWeb, string name)
        {
            SPContentType myContentType = new SPContentType(spWeb.ContentTypes["Document"], spWeb.ContentTypes, name)
            {
                Group = columnGroup
            };
            spWeb.ContentTypes.Add(myContentType);
            addContentTypeLinkages(spWeb, myContentType);
            myContentType.Update();
        }

    Here we are adding just one linkage, as we only have one additional field in our content type.

        public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct)
        {
            Utilities.addContentTypeLink(spWeb, "Icon", ct);
        }

    Next we add the logic to create our new document library, which we have already checked does not exist. We create the document library and turn on content types, add the new content type, and then delete the old "Document" content type.

        private void CreateDocLib(SPWeb web)
        {
            using (var site = new SPSite(web.Url))
            {
                var web1 = site.RootWeb;
                var listId = web1.Lists.Add("MyDocuments", string.Empty, SPListTemplateType.DocumentLibrary);
                var lib = web1.Lists[listId] as SPDocumentLibrary;
                lib.ContentTypesEnabled = true;
                var docType = web.ContentTypes[ctName];
                lib.ContentTypes.Add(docType);
                lib.ContentTypes.Delete(lib.ContentTypes["Document"].Id);
                lib.Update();
                AddLibrarySettings(web1, lib);
            }
        }

    Finally, we set some document library settings on our new document library with the AddLibrarySettings method, and then ensure that the new site column is visible when viewed in the browser.

        private void AddLibrarySettings(SPWeb web, SPDocumentLibrary lib)
        {
            lib.OnQuickLaunch = true;
            lib.ForceCheckout = true;
            lib.EnableVersioning = true;
            lib.MajorVersionLimit = 5;
            lib.EnableMinorVersions = true;
            lib.MajorWithMinorVersionsLimit = 5;
            lib.Update();

            var view = lib.DefaultView;
            view.ViewFields.Add("Icon");
            view.Update();
        }

    Okay, what's cool here: in a few lines of code we have created site columns, a content type, and a document library. As a developer, I use this functionality all the time. For instance, I could now add a web part to this same solution which uses this document library. I love SharePoint! Here is the complete solution: Create Document Library Code

    Read the article

  • how to remove update repo that always fails

    - by David M. Karr
    A week or so ago I tried to add a new package repo that supposedly had a package I wanted. Unfortunately, the information about it was out of date, and I found that it fails to connect to it each time. I'd like to just remove the new repo, but I'm not sure how to do that. For context, when I update, I get this:

        W:Failed to fetch http://ppa.launchpad.net/geod/ppa-geod/ubuntu/dists/precise/main/source/Sources  404 Not Found [IP: 135.214.42.30 8080]
        W:Failed to fetch http://ppa.launchpad.net/geod/ppa-geod/ubuntu/dists/precise/main/binary-amd64/Packages  404 Not Found [IP: 135.214.42.30 8080]
        W:Failed to fetch http://ppa.launchpad.net/geod/ppa-geod/ubuntu/dists/precise/main/binary-i386/Packages  404 Not Found [IP: 135.214.42.30 8080]
        E:Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • How to access files on a drive from an older system, mounted in a new system?

    - by David Thomas
    I've recently built a new system, after a rather large physical injury was sustained by my previous system (a precarious balance, and gravity, were not a happy mix). Surprisingly, the /home drive of that system appears to have more-or-less survived the trauma. However... I decided to use a fresh drive for the / (and swap) partition(s), and another fresh drive for the new /home. Now that's working, I decided to install the old /home drive (that I had assumed until now would be entirely dead and without capacity for use) into the new system to recover the files and data (so far as is possible). At this point I've run into a snag: I have no idea how to go about this (with Windows it was relatively easy: the new drive would be the latest character of the alphabet, and go from there). With Disk Utility (System - Administration - Disk Utility) I've worked out which drive it is (/dev/sda), but clicking on 'mount' produces an error:

        1: helper failed with: mount: according to mtab, /dev/sdb1 is already mounted on /
        mount failed

    ...if it is mounted on / I can't see it. I'm also moderately confused by the disk (device /dev/sda) being referred to as /dev/sdb1. Any and all insights would be incredibly welcome (I've already voted for: Idea #9063: New internal hard drives default automount at Brainstorm).

    Edited in response to Roland's request for a screenshot of disk utility. Details (so far as I know them): the 40GB disk is / and swap, the 1.0 TB Samsung is /home, and the 1.0 TB Hitachi is from the old system (and was the old /home drive). Output from sudo fdisk -l pasted below:

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000bef00

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1      121601   976760001   83  Linux

        Disk /dev/sdb: 40.0 GB, 40018599936 bytes
        255 heads, 63 sectors/track, 4865 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00037652

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *           1        4742    38084608   83  Linux
        /dev/sdb2            4742        4866      993281    5  Extended
        /dev/sdb5            4742        4866      993280   82  Linux swap / Solaris

        Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e8d46

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               1      121602   976760832   83  Linux

    Read the article

  • ubuntu box just redisplaying login screen after update

    - by David M. Karr
    My Ubuntu 12.04 box has been working fine. A recent update may have messed something up. I normally run remote windows on it, and I noticed that my windows were failing to start up. I then tried logging into it directly from the GUI console, and I'm seeing that after I press enter on the (valid) password, the page just redisplays. It's not a password error, as that would give me an inline error. I see some messages appear and disappear quickly between the login screen going away and then redisplaying, but they go away too quickly to read. I was able to run the non-GUI login, and I did an update and upgrade, and then rebooted, but it's doing the same thing. I have a Samba connection from my Windows box, and that's still working. If it matters, here's my uname output (somewhat elided):

        Linux ... 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    What can I do to troubleshoot this? Note that when I select "Guest Session", it lets me log in and displays the window manager. This seems significant to me. Does this mean that something specific to my login is causing it to fail?

    Note: If it matters, here's the output from /var/log/dmesg. The line about gdm seems interesting:

        [    9.815883] Bluetooth: RFCOMM TTY layer initialized
        [    9.815887] Bluetooth: RFCOMM socket layer initialized
        [    9.815888] Bluetooth: RFCOMM ver 1.11
        [    9.879088] [PCSPP,TRISTATE]
        [    9.879092] parport0: irq 7 detected
        [    9.883935] type=1400 audit(1341871177.871:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=845 comm="apparmor_parser"
        [    9.884365] type=1400 audit(1341871177.871:11): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/ntpd" pid=851 comm="apparmor_parser"
        [    9.950397] e1000e 0000:00:19.0: irq 42 for MSI/MSI-X
        [    9.961160] init: gdm main process (907) killed by TERM signal
        [    9.966358] lp0: using parport0 (polling).

    Read the article

  • History of Mobile Technology

    - by David Dorf
    Over the last ten years, mobile phones have gone through several incremental technology leaps that have added capabilities that impact the retail industry. I've listed the six major ones below, along with their long-lasting impact.

    1. Location. In the US, the FCC required mobile phones to implement E911 (emergency calls) by 2006, requiring the caller to be located to within 300 meters. Back in 2000, GPS was opened up for civilian use, and by 2004 Qualcomm had figured out how to use GPS in mobile phones. So mobile operators moved from cell tower triangulation to GPS, principally for E911. But then lots of other uses became apparent, especially navigation. The earliest mobile apps from retailers made it easy to find nearby stores, and companies are looking at ways to use WiFi triangulation inside stores.

    2. Computer Vision. In 1997 Philippe Kahn shared a photo of his newborn using a mobile phone, thus launching the popularity of instant visual communications. Over the years the quality of the cameras got better, reaching the point where barcodes could be read around 2008. That's when Occipital came on the scene with their RedLaser application, which was eventually acquired by eBay. This opened up the ability for consumers to easily price-compare inside stores. Other interesting apps included Tesco's Wine Finder and Amazon's Price Checker, both allowing products to be identified by picture.

    3. Augmented Reality. Once the mobile phone had GPS, a video camera, and compass functionality, it was suddenly possible to overlay digital information on the screen in real time. Yelp, which was using GPS to find nearby merchants, created a backdoor called Monocle on the iPhone that showed nearby merchants overlaid on the video camera view. Today AR apps are mostly used by retailers for marketing, like Moosejaw's app that undresses models in their catalog.

    4. Geo-Fencing. So if we're able to track the location of a mobile phone, why not use that context to offer timely information? My first experience with geo-fencing came courtesy of North Face, the outdoor enthusiast store. When a mobile phone enters a predetermined area, like near a store, a text message is sent to the phone with an offer or useful information. Of course retailers can geo-fence their competitors as well and find out which customers aren't so loyal.

    5. Digital Wallet. Mobile payments leverage different technologies such as NFC, QR codes, Bluetooth, and SMS to facilitate communication between the consumer's phone and the retailer's point-of-sale. The key here is the potential to consolidate loyalty cards, coupons, and bank cards into the mobile phone and enable faster checkout. Nobody does this better than Starbucks today, but McDonald's and Dunkin' Donuts aren't far behind. Google, Isis, PayPal, Square, and MCX are all vying for leadership in this area. If NFC does finally take off, it will be leveraged by retailers in more places than just the POS.

    6. Voice Response. Mobile phones have had the ability to interpret simple voice commands for a while, but Google and Amazon were the first to use voice to allow searches for products. Allowing searches by text, barcode, and voice makes it easy to comparison shop in the aisles. Walmart even uses voice to build shopping lists, and if the Siri API is ever opened we could see lots more innovation in this area.

    Read the article

  • Comparing Isis, Google, and Paypal

    - by David Dorf
    Back in 2010 I was sure NFC would make great strides, but here we are two years later and NFC doesn't seem to be sticking. The obvious reason is the chicken-and-egg problem: retailers don't want to install the terminals until the phones support NFC, and vice versa. So consumers continue to sit on the sidelines waiting for either side to blink and make the necessary investment. In the meantime, EMV is looking for a way to sneak into the US with the help of the card brands.

    There are currently three major solutions battling in the marketplace. All three know that replacing the mag-stripe alone is not sufficient to move consumers. Long-term, it's the offers and loyalty programs combined with tendering that make NFC attractive. NFC solutions cross lots of barriers, so a strong partner system is required. The solutions need to include the carriers, card brands, banks, handset manufacturers, POS terminals, and most of all lots of merchants. Lots of coordination is necessary to make the solution seamless to the consumer.

    Google Wallet. Google's problem has always been that only the Nexus phone has an NFC chip that supports their wallet. There are a couple of additional phones out there now, but adoption is still slow. They acquired Zavers a while back to incorporate digital coupons, but the bulk of their users continue to be non-NFC. They have taken an open approach by not specifying particular payment brands. Google is piloting in San Francisco and New York, supporting both MasterCard PayPass and stored value. I suppose the other card brands may eventually follow. There's no cost for consumers or merchants -- Google will make money via targeted ads.

    Isis. Not long after Google announced its wallet, AT&T, Verizon, and T-Mobile announced a joint venture called Isis. They are in the unique position of owning the SIM in the phones they issue. At first it seemed Isis was a vehicle for the carriers to compete with the existing card brands, but Isis later switched to a generic wallet that supports the major card brands. Isis reportedly charges issuers a $5 fee per customer per year. Isis will pilot this summer in Salt Lake City and Austin.

    PayPal. PayPal, the clear winner in the online payment space beyond traditional credit cards, is trying to move into physical stores. After negotiations with Google to provide a wallet broke off, PayPal decided to avoid NFC altogether, at least for now, and focus on payments without any physical card or phone. By avoiding NFC, consumers don't need an NFC-enabled phone and merchants don't need a new reader. Consumers must enter their phone number and PIN in the merchant's existing device, or they can enter their PIN in the PayPal inStore app running on their phone and then show the merchant a unique barcode which authorizes payment. PayPal is free for consumers and charges a fee for merchants. It's not clear, at least to me, how PayPal handles fraudulent transactions and whether the consumer is protected.

    The wildcard is, of course, Apple. Their mobile technologies set the standard, so incorporating NFC chips would certainly accelerate adoption of many payment solutions. Their announcement today of the iOS Passbook is a step in the right direction, but stops short of handling payments. For those retailers that have invested in modern terminals, it seems the best strategy is to support all the emerging solutions and let the consumers choose the winner.

    Read the article

  • XNA Moddable Game - Architecture Design and Reflection

    - by David K
    I've decided to embark on a moddable XNA game project in a simple rogue style. For the purposes of this question, I'm not going to use a scripting engine, but rather allow modders to directly compile assemblies that are loaded by the game at run time. I know about the security problems this may raise. So, in order to expose the moddable content, I have created a generic project in XNA called MyModel. This contains a number of interfaces that all inherit from IPlugin, such as IGameSystem, IRenderingSystem, IHud, IInputSystem, etc. Then I've created another project called MyRogueModel. This references the MyModel project and holds interfaces such as IMonster, IPlayer, IDungeonGenerator, IInventorySystem. These are more rogue-specific interfaces, but again, all interfaces in this project inherit from IPlugin. Finally, I've created another project called MyRogueGame, which references both the MyModel and MyRogueModel projects. This project will be the game that you run and play. Here I have put the actual implementations of the Monster, DungeonGenerator, InputSystem and RenderingSystem classes. This project will also scan the mods directory at run time, load any IPlugins it finds using reflection, and override the defaults with anything it finds; for example, if it finds a new implementation of the DungeonGenerator, it will use that one instead. Now my question is: in order to get this far, I have effectively two projects that contain nothing but interfaces... which seems a little strange? For people to create mods for the game, I would give them both the MyModel and MyRogueModel assemblies to reference. I'm not sure whether this is the right way to do it, but my reasoning goes as follows: if I write one input system, I can use it in any game I write. If I create three rogue-like games, and a modder writes one rendering system, that modder could use the rendering system for all three games, because it all comes from the MyModel project. I come from a more web-based C# role, so having empty interface projects doesn't seem wrong; it's just something I haven't done before. Before I embark on something that might be crazy, I'd just like to know whether this is a foolish idea and whether there's a better (or established) design principle I should be following?
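    A note on the mechanics being described: the game scans a mods folder at run time, loads whatever compiled plugins it finds, and substitutes them for the built-in implementations. The question targets C#/XNA, where assembly loading and reflection over IPlugin types play this role; purely as a language-agnostic illustration of the same scan-and-override idea, here is a minimal sketch in Java using URLClassLoader and ServiceLoader. The Plugin interface and the mods-folder layout are assumptions for the example, not part of the project described above.

        import java.io.File;
        import java.net.URL;
        import java.net.URLClassLoader;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.ServiceLoader;

        public final class ModLoader {

            // Stand-in for the IPlugin marker interface from the interfaces-only projects.
            public interface Plugin {
                String name();
            }

            // Collects every jar in the mods directory and asks ServiceLoader for Plugin implementations.
            public static List<Plugin> loadMods(File modsDir) throws Exception {
                List<URL> jarUrls = new ArrayList<URL>();
                File[] entries = modsDir.listFiles();
                if (entries != null) {
                    for (File entry : entries) {
                        if (entry.getName().endsWith(".jar")) {
                            jarUrls.add(entry.toURI().toURL());
                        }
                    }
                }
                ClassLoader modClassLoader = new URLClassLoader(
                        jarUrls.toArray(new URL[0]), ModLoader.class.getClassLoader());

                List<Plugin> plugins = new ArrayList<Plugin>();
                // Each mod jar declares its implementations in META-INF/services/ModLoader$Plugin.
                for (Plugin plugin : ServiceLoader.load(Plugin.class, modClassLoader)) {
                    plugins.add(plugin); // a found plugin later replaces the matching default
                }
                return plugins;
            }
        }

    The interfaces-only projects in the question play exactly the role of the Plugin contract here: they are the one artifact that both the game and the modders compile against, which is why assemblies containing nothing but interfaces are a common and reasonable shape for this kind of design.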

    Read the article

  • Payments for Android through Checkout/AdSense

    - by David Cesarino
    To those that don't know, Android developers in some countries recently transitioned from AdSense to Checkout for Play Store payments. This is what existing seller accounts are told:

        Q: What happens if I have funds in my AdSense account but am not eligible for a payout yet?
        A: AdSense accounts have minimum thresholds for payouts. If you're not eligible for a payout through AdSense for [month of migration], the funds will be automatically transferred back to your Google Checkout account. Once you enter bank account information through your Checkout account and have accrued at least $100 USD, your first wire transfer will be issued during the next monthly payout cycle.

    However, AdSense is still holding my funds, and since Checkout already paid me directly following the new directives, I'm afraid the funds will be held in AdSense forever (I used AdSense only for Play Store payments, as required). Obviously, this is no replacement for Google support (a crusade to reach them, but never mind...); I'm just asking whether someone experienced this problem during the transition and how it was fixed.

    Read the article

  • CAMeditor v1.9 – thoughts and reflections

    - by david.webber(at)oracle.com
    We recently published the latest iteration of the CAMeditor tool on Sourceforge.net including more enhancements to the NIEM capabilities. This release represented an incremental improvement over the prior version with mostly bug fixes and patches. We’re now working on the full v2.0 release which will feature substantial improvements and new features in practically all areas.  Most importantly we are improving the dictionary handling and providing the ability to visually design new exchange schema directly from dictionary sets of components. In addition we are doing some interim release work on 1.9.x with patches and enhancements particularly to support running on Ubuntu and non-Windows platforms. And we are also providing an Ant script based deployment for the CAMV validation engine so you can do unit testing of batches of templates and XML instance samples using command line scripts. More updates will be forthcoming as we make early release versions available for testing purposes.

    Read the article

  • Virtual screen size with libgdx and GLES 2

    - by David Saltares Márquez
    I've been trying to use a virtual screen size for my libgdx desktop/Android game. I'd like to always use a 16:9 aspect ratio, but with a virtual screen size so everything would adapt automatically depending on the device size. This post illustrates the process pretty well, but my game crashes when camera.apply(Gdx.gl10) is called. This is because I'm using GLES 2.0 (so I don't have to use power-of-two texture sizes). As stated in the OrthographicCamera doc, the apply method only works with GLES 1.0 and GLES 1.1. Is there another way of applying my GL transformation to the camera so I can use a virtual screen resolution? Having to resize everything manually is a total pain. Thanks a lot.
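    For context on the usual GLES 2 approach: instead of camera.apply(Gdx.gl10), you hand the camera's combined matrix to whatever does the drawing and keep all game coordinates in virtual units. Below is a minimal sketch using a SpriteBatch; the 720x1280 virtual size is only an example, and note that a fixed orthographic size like this stretches rather than letterboxes on devices that are not 16:9.

        import com.badlogic.gdx.ApplicationAdapter;
        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.graphics.GL20;
        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;

        public class VirtualScreenSketch extends ApplicationAdapter {
            static final float VIRTUAL_WIDTH = 720f;
            static final float VIRTUAL_HEIGHT = 1280f;

            OrthographicCamera camera;
            SpriteBatch batch;

            @Override
            public void create() {
                camera = new OrthographicCamera();
                // All game code works in 720x1280 virtual units, whatever the real resolution is.
                camera.setToOrtho(false, VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
                batch = new SpriteBatch();
            }

            @Override
            public void render() {
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
                camera.update();
                // GLES 2 replacement for camera.apply(): give the batch the camera's combined matrix.
                batch.setProjectionMatrix(camera.combined);
                batch.begin();
                // ... draw sprites here using virtual coordinates ...
                batch.end();
            }
        }

    If true letterboxing is needed, the usual trick is to compute a glViewport that preserves the 16:9 ratio before drawing, but the matrix hand-off above is the part that removes the dependency on GLES 1.x.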

    Read the article

  • How do I fix "malformed line" errors in sources.list?

    - by david
    I had problems with my movie player. The sound was fine, but the picture seemed to hesitate or stick or pause every few seconds, so I was looking for some help on how to fix this. I tried to run an install command in the terminal and got this message:

        E:Malformed line 68 in source list /etc/apt/sources.list (dist parse)

    Can someone tell me how to fix this problem? I cannot open the Software Center or the Update Manager. The only other option I can think of is to wipe everything out and do a clean install. Thank you.

    Read the article

  • How to fix corrupted desktop icons and fonts?

    - by David Harvey
    I love Linux, but am a real novice. I installed Ubuntu 12.04 LTS 32-bit alongside Windows XP. The installation seems to work just fine except for the desktop: the icons and fonts become corrupted to the point that they look like Chinese characters. After surfing with Mozilla Firefox for a long time, the same problem begins with the web pages. I want to be free of Windows, but must solve this problem first. Can you help? Thank you.

    Read the article

  • JavaOne - Java SE Embedded Booth - Pactron Java Programmable Automation Controller (JPAC)

    - by David Clack
    Hi All, So at the last JavaOne we talked about developing a Java-powered Programmable Automation Controller (JPAC) with our partner Pactron in Santa Clara. We actually demoed it running for the first time at the Embedded show in Germany this March. JPAC is based on a Marvell 88F6282 Kirkwood ARM SoC. We partnered with Hilscher, from just outside Frankfurt, Germany, for the mini PCI ProfiBus controllers, and Revolution Robotics from Corvallis, Oregon wrote the interface between Java SE Embedded for ARM and the Hilscher Linux driver. Revolution Robotics also designed the HTML5 application that runs on a Marvell ARM tablet to actually send and receive commands via ProfiBus to a slave device. We will have the system running in our booth at JavaOne this year, so come take a look. If you are registered at JavaOne you can come over to Java Embedded @ JavaOne for $100. Come see us in booth 5605. See you there. Dave

    Read the article

  • Mentioning a price for a service behind a free app in App Store

    - by David
    We are making a business service and have created an app to be used as a front-end. The app is free, but the service is not. Due to Apple taking 30% of in-app purchases, our service cannot be bought as an in-app purchase. My question is: Could Apple choose to throw out our app if we mention the price of the service in the description of our app in iTunes or in a help-text in the app? It seems unreasonable that this would cause problems, but the potential consequences to our business could be terrible so I want to make sure.

    Read the article

  • How to Make Objects Fall Faster in a Physics Simulation

    - by David Dimalanta
    I'm using collision physics (Box2D, with shapes from the Physics Body Editor) in my Java code. I'm trying to make the fall speed higher, following these examples: an object falls slower if it is light (e.g. a feather), and faster depending on the object (e.g. a pebble, rock, or car). I decided to double the falling speed for more excitement. I tried adding mass, but the falling speed stays constant instead of increasing. Here is the relevant code, which runs in the touchUp() method of the class that implements both InputProcessor and Screen:

        @Override
        public boolean touchUp(int screenX, int screenY, int pointer, int button) {
            // TODO Touch Up Event
            if (is_Next_Fruit_Touched) {
                BodyEditorLoader Fruit_Loader = new BodyEditorLoader(
                        Gdx.files.internal("Shape_Physics/Fruity Physics.json"));
                Fruit_BD.type = BodyType.DynamicBody;
                Fruit_BD.position.set(x, y);

                FixtureDef Fruit_FD = new FixtureDef(); // --> Allows you to make the object's physics.
                Fruit_FD.density = 1.0f;
                Fruit_FD.friction = 0.7f;
                Fruit_FD.restitution = 0.2f;

                MassData mass = new MassData();
                mass.mass = 5f;

                Fruit_Body[n] = world.createBody(Fruit_BD);
                Fruit_Body[n].setActive(true); // --> Let your dragon fall.
                Fruit_Body[n].setMassData(mass);
                Fruit_Body[n].setGravityScale(1.0f);
                System.out.println("Eggs... " + n);

                Fruit_Loader.attachFixture(Fruit_Body[n], Body, Fruit_FD, Fruit_IMG.getWidth());
                Fruit_Origin = Fruit_Loader.getOrigin(Body, Fruit_IMG.getWidth()).cpy();

                is_Next_Fruit_Touched = false;
                up = y;
                Gdx.app.log("Initial Y-coordinate", "Y at " + up);

                // Once it's touched, the next fruit will be set to drag.
                if (n < 50) {
                    n++;
                } else {
                    System.exit(0);
                }
            }
            return true;
        }

    And take note that in the show() method, the view size of the camera is 720x1280:

        camera_1 = new OrthographicCamera();
        camera_1.viewportHeight = 1280;
        camera_1.viewportWidth = 720;
        camera_1.position.set(camera_1.viewportWidth * 0.5f, camera_1.viewportHeight * 0.5f, 0f);
        camera_1.update();

    I know it seems like a good idea to add weight so the object falls faster once I release my finger in touchUp() (after picking the object from the upper right of the screen), but the speed remains either constant or slow. How can I solve this? Can you help?
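    A note on why adding mass does not change anything, plus a sketch of the knobs that do: under Box2D gravity, every dynamic body accelerates at the same rate regardless of its mass, so a heavier fixture falls no faster than a light one. Fall speed is controlled by scaling gravity, either per body or for the whole world, and a "feather" effect comes from linear damping rather than low mass. The snippet below is a minimal illustration using standard libgdx Box2D calls; the numbers are only examples and are not tuned for the fruit code above.

        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.physics.box2d.Body;
        import com.badlogic.gdx.physics.box2d.World;

        public final class FallSpeedTweaks {

            // Stronger global gravity: every dynamic body in this world accelerates downward twice as fast.
            public static World createFastWorld() {
                return new World(new Vector2(0f, -9.8f * 2f), true);
            }

            // Per-body tuning: a "rock" gets double gravity, a "feather" gets a fraction of it plus damping.
            public static void tuneBody(Body body, boolean heavy) {
                if (heavy) {
                    body.setGravityScale(2.0f);
                    body.setLinearDamping(0f);
                } else {
                    body.setGravityScale(0.3f);
                    body.setLinearDamping(1.5f); // damping, not low mass, is what makes light things drift down
                }
            }
        }

    Applied to the code above, the one-line version of the same idea is to pass a value greater than 1 to Fruit_Body[n].setGravityScale(...) instead of 1.0f.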

    Read the article

  • Store HighRes photos in Database or as File?

    - by David
    I run a site which has a couple of million photos and gets over 1000 photos uploaded each day. Up to now, we haven't kept the original file that was uploaded, to conserve space. However, we are getting to a point where we are starting to see a need for high-res original versions. I was wondering if it's better to store these in the filesystem as actual files or in a database (i.e. MySQL). The high-res images would be rarely referenced, but may be used when someone decides to download one or when we decide to use it for rare processes like generating a new set of thumbnail sizes.
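    One common pattern for this kind of requirement (described here only as an illustration, not as this site's actual design) is to keep the original bytes on the filesystem or on object storage and store just a relative path plus metadata in MySQL, since multi-megabyte blobs tend to bloat the database, its replication, and its backups, while rarely-read files are exactly what filesystems handle well. A minimal sketch of a hash-sharded on-disk layout, with the root directory and file extension as assumed placeholders:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public final class OriginalPhotoStore {

            private final Path root;

            public OriginalPhotoStore(Path root) {
                this.root = root;
            }

            // Writes the uploaded original under a content-hash path and returns the
            // relative path, which is what the database row would store next to the
            // photo's metadata.
            public String save(byte[] originalBytes) throws IOException, NoSuchAlgorithmException {
                MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
                StringBuilder hex = new StringBuilder();
                for (byte b : sha1.digest(originalBytes)) {
                    hex.append(String.format("%02x", b));
                }
                // e.g. ab/cd/abcdef....jpg -- two levels of sharding keep directories small.
                String relative = hex.substring(0, 2) + "/" + hex.substring(2, 4) + "/" + hex + ".jpg";
                Path target = root.resolve(relative);
                Files.createDirectories(target.getParent());
                Files.write(target, originalBytes);
                return relative;
            }
        }

    Content hashing also deduplicates identical uploads for free, which matters at a couple of million photos.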

    Read the article

  • OWB - 11.2.0.4 Windows standalone client released

    - by David Allan
    The 11.2.0.4 release of OWB containing the 32-bit and 64-bit standalone Windows client is released today; I had previously blogged about the Linux standalone client here. Big thanks to Anil for spearheading that, another milestone on the Data Integration roadmap. Below are the patch numbers:

        17743124 - OWB 11.2.0.4 STANDALONE CLIENT FOR Windows 64 BIT
        17743119 - OWB 11.2.0.4 STANDALONE CLIENT FOR Windows 32 BIT

    This is the terminal release of OWB, and customer bugs will be resolved on top of this release. We are excited to share information on the Oracle Data Integration 12c release in our upcoming launch video webcast on November 12th.

    Read the article

  • Infrastructure to effectively set up experiements and learn from them

    - by David
    Open-org.com is in the early stages of creating our first product, a place on the web where one can ask lawyers questions at a fraction of their normal cost. An early-stage front page can be found here. I got inspired by this video, which is recommended by Jeff Atwood and which talks about getting feedback faster; that is the reason for this question.

    The problem: Needless to say, we want our conversion rates to be as high as possible. Therefore, we want to be able to rapidly set up a new experiment where we change something on the site (like moving an image slightly, rewriting a sentence, etc.). We then want to present the modified page to a random subset of the users. After that we will compare the conversion rates of the experiment with another version. I could very well imagine that we want to run 10-100 experiments simultaneously, and it would be nice to have features where experiments that are obviously doing worse can be ended ahead of schedule.

    My question: Does infrastructure to support the whole process exist? A short description of our infrastructure: we use EC2 and PHP, and have a script to automatically start up new instances with all the needed software. Still, starting up a new server for every experiment seems like overkill, so I am wondering what other options exist. By the way, if you feel like working for Open-org.com, you can pick a task and start working, or suggest a new task. All profits are given out to the contributors.
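    Hosted tools aside, the core mechanism they all implement is small: deterministically bucket each visitor so the same person always sees the same variant, record an exposure event, and compare conversions per bucket; ending a badly performing experiment early is then just removing its name from the active list. The stack described above is PHP, so purely as a language-agnostic illustration of the assignment step, here is a sketch in Java; the experiment name and visitor ID are placeholders.

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public final class ExperimentBucketer {

            // Deterministically maps (experiment, visitor) to a variant index, so no
            // per-user assignment state has to be stored anywhere.
            public static int variantFor(String experimentName, String visitorId, int variantCount) {
                try {
                    MessageDigest md = MessageDigest.getInstance("MD5");
                    byte[] hash = md.digest((experimentName + ":" + visitorId).getBytes(StandardCharsets.UTF_8));
                    int bucket = ((hash[0] & 0xFF) << 8) | (hash[1] & 0xFF); // 0..65535, stable per input
                    return bucket % variantCount;
                } catch (NoSuchAlgorithmException e) {
                    throw new IllegalStateException("MD5 unavailable", e);
                }
            }

            public static void main(String[] args) {
                // Example: split the hypothetical "front-page-image" experiment into control + 2 variants.
                System.out.println(variantFor("front-page-image", "visitor-42", 3));
            }
        }

    Because the variant is a pure function of the visitor ID and the experiment name, running 10-100 experiments at once is just 10-100 independent calls, and each one can be analyzed (or stopped) on its own.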

    Read the article

  • My Graduate Experience at Oracle by Mayuri Khinvasara

    - by david.talamelli
    My experience at Oracle. I still vividly remember the day when my name was announced in the campus hiring list of Oracle at my college. I was proud of myself, but at the same time I was getting goose bumps!!! A new world had arrived before me, and the anxiousness of whether I could survive it or not had gripped me. Nervous about moving into an unknown city, I came to visit Hyderabad with my father. One look at the Oracle campus and I felt some kind of magnetism pulling me towards it. And then, I joined Oracle in June 2009, with a lot of apprehensions in my mind. The HR rep made us really comfortable in the first week itself. I met so many new people, managers, HR folks and, most importantly, 20 other campus hires like me. Then we had our team bonding sessions, team parties etc. I didn't realize when the transition from campus to corporate happened. And I had started loving it. The confidence the HR reps gave us and the bonding our managers imbibed in us made us all ready for the new life ahead. Then started the rigorous training sessions, the excitement about our new work, new cubicles, new desktops, our first business cards, our first conference call and so on. I made new friends who were now my extended family, and I discovered the freedom and courage of living alone. I was enjoying all that. As I was getting totally immersed in my regular work schedule, I started getting to know the innumerable Oracle products, their functionalities, their implementations, and the brand that Oracle is. Work pressure started increasing and so did the challenges to understand and deliver. I didn't realize how days, and soon months, passed by. Then came a golden chance to visit the Oracle headquarters in the US for 45 days of training in November 2009. Once again, the excitement was enormous: about the counterpart team-mates in HQ, the trainings ahead, the US work culture and my stay there. I felt so privileged to be in the company I was working for. Boarding an international flight for the first time and visiting famous US cities which I had only seen in movies was now a reality. It was a totally amazing experience. Work pressure kept me really busy, with learning new things every day, the immense satisfaction of delivering something, and the nightmares of debugging a mistake, only to realize how silly it was. I was enjoying the process. Soon a year passed by. I had transformed into this corporate software professional I couldn't believe I could be. Today, I complete 1 year and 8 months at Oracle and continue to look forward to the enriching experience I will have here. Truly one of the top companies in the world. Mayuri Khinvasara

    Read the article

  • Validating User Stories: How much change is too much?

    - by David Kaczynski
    While the core of requirements development and acceptance criteria would ideally take place during the planning meeting in order to create a better estimate, Scrum encourages continuous interaction with the product owner throughout the sprint to validate and refine user stories. What kind of criteria is used to judge if there is too much change being imposed on a user story mid-sprint? When is it appropriate to change the requirements of the user story? When is it appropriate to cancel the user story / sprint in order to re-evaluate and re-estimate a user story in question?

    Read the article

  • Firefox FUD not lagging

    Netstat -vat: "Can Firefox's innovation and growth curve continue? In a comment attributed to former Firefox developer Blake Ross, apparently not."

    Read the article

  • How can test users access an unpublished iOS app?

    - by David
    I am considering outsourcing the development of an iOS app to various independent developers, and I will have various testers of the app. We all work for separate companies. Some of these testers will be customers, from whom I would like feedback. As there are multiple developers involved, I expect there to be a new release on a daily basis. How can this be done? Would each of the testers need to buy some sort of license to avoid having to go through the app approval process? Is there any smooth way to do this so that it will not be a hassle for our friendly customers who are willing to test our app?

    Read the article

  • JavaOne - Java SE Embedded Booth - Servergy Micro Server

    - by David Clack
    Hi All, So it's been a while; I've been working with all the ARM and Power Architecture partners we have now on testing Java SE Embedded. We will have Java SE Embedded for ARM and PPC at JavaOne next week, and I'll be bringing in some of the great ARM and PPC systems to demonstrate. The first system I'd like to tell you about is a really cool 8-core Power Architecture micro server from a company in Dallas called Servergy. JavaOne will be its first public outing. Bill Mapp, the CEO, will be doing a talk at the Java Embedded @ JavaOne conference in the Hotel Nikko, right next door to the JavaOne show in the Hilton. To read more about Servergy: https://www.linux.com/news/enterprise/cloud-computing/641488-linux-based-servergy-advances-data-center-efficiency http://www.servergy.com/ If you are registered at JavaOne you can come over to Java Embedded @ JavaOne for $100. Come see us in booth 5605. See you there. Dave

    Read the article
