Search Results

Search found 44742 results on 1790 pages for 'after create'.


  • Efficiency concerning thread granularity

    - by MaelmDev
    Lately, I've been thinking of ways to use multithreading to improve the speed of different parts of a game engine. What confuses me is the appropriate granularity of threads, especially when dealing with single-instruction-multiple-data (SIMD) tasks. Let's use line-of-sight detection as an example. Each AI actor must be able to detect objects of interest around them and mark them. There are three basic ways to go about this with multithreading: don't use threading at all; create a thread for each actor; or create a thread for each actor-object combination. Option 1 is obviously going to be the least efficient method. However, choosing between the next two options is more difficult. Using only one thread per actor still runs through every object in series instead of in parallel. However, can CPUs create and join threads at the granularity posed in Option 3 efficiently? It seems like that many calls to the OS could be really slow, with the cost varying enormously between different hardware.
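
    (A sketch of a middle ground, added for illustration rather than taken from the question: task-based APIs hand per-actor work to a fixed pool of worker threads, so you get Option 2's granularity without creating and joining an OS thread per actor or per actor-object pair. In C#, with hypothetical Actor and WorldObject stand-ins for the engine's real types:)

      using System.Collections.Generic;
      using System.Threading.Tasks;

      // Hypothetical stand-ins for the engine's real types.
      class WorldObject { public int Id; }

      class Actor
      {
          public readonly List<WorldObject> Seen = new List<WorldObject>();
          public bool CanSee(WorldObject o) { return o.Id % 2 == 0; } // placeholder visibility test
          public void Mark(WorldObject o) { Seen.Add(o); }            // only this actor's task writes here
      }

      static class LineOfSightSystem
      {
          // One task per actor (Option 2 granularity); the runtime schedules
          // the tasks onto a reusable pool instead of spawning OS threads.
          public static void Update(List<Actor> actors, List<WorldObject> objects)
          {
              Parallel.ForEach(actors, actor =>
              {
                  // The per-object loop stays serial inside each task; splitting
                  // it further (Option 3) mostly adds scheduling overhead.
                  foreach (var o in objects)
                      if (actor.CanSee(o)) actor.Mark(o);
              });
          }
      }

    (Thread creation happens once, inside the pool, so the recurring per-frame cost is task scheduling rather than OS calls.)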

    Read the article

  • Desktop Fun: Snow Covered Trees Wallpaper Collection

    - by Asian Angel
    Trees can become beautiful works of natural art when snow accumulates on them and make you feel as if you have stepped into another world when walking through them. So grab your jacket, gloves, and snow boots for a journey through this frosty scenery with our Snow Covered Trees Wallpaper Collection. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. For more wallpapers be certain to see our great collections in the Desktop Fun section.

    Read the article

  • Tip: Recording Non-Maximized Applications in UPK

    - by Marc Santosusso
    Have you ever wanted to record an application that would not maximize, or an application that would look strange maximized? Or perhaps your Windows Desktop has become cluttered with icons and you don't want to capture the clutter in your recordings. Here's a tip that will help: create a background for your recording. 1. Create a blank HTML file with a black background in your favorite HTML editor, or download this sample file: UPK_Recording_Background.html (right click to save). If you would prefer a different color background in the sample file, open it in Notepad and change “#000” to a different HTML color. 2. Open UPK_Recording_Background.html in its own web browser window. 3. Press F11 to make the web browser window full screen. This should give you a completely black screen. (This works great in modern versions of the most popular browsers; I successfully used Firefox 15, Chrome 22, and IE 9.) 4. Open or switch to the desired application so that it sits on top of the full-screen browser window. If the application you are recording is also in a browser, it is important that it be in a separate browser window from UPK_Recording_Background.html. 5. Record your topic normally. The above steps create a recording background using an HTML file and a web browser. This is just one method; for instance, you could do the same thing with an image editor and an image viewer with a full-screen view. Now you can record a non-maximized application without a distracting background. I hope you find this to be a helpful tip. Let us know what you think in the comments.
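
    (For reference, a minimal sketch of what such a background file might contain; the downloadable sample may differ:)

      <!DOCTYPE html>
      <html>
        <head><title>UPK Recording Background</title></head>
        <!-- Change #000 to any other HTML color for a different backdrop -->
        <body style="background-color: #000; margin: 0;"></body>
      </html>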

    Read the article

  • Creating Parent-Child Relationships in SSRS

    - by Tim Murphy
    As I have been working on SQL Server Reporting Services reports the last couple of weeks, I ran into a scenario where I needed to present a parent-child data layout.  It is rare that I have seen a report that was a simple tabular or matrix format, and this report continued that trend.  I found that the processes for developing complex SSRS reports aren’t as commonly described as I would have thought.  Below I will lay out the process that I went through to create a solution. I started with a List control which will contain the layout of the master (parent) information.  This allows for a main repeating report part.  The dataset for this report should include the data elements needed to be passed to the subreport as parameters.  As you can see, the layout is simply text boxes that are bound to the dataset. The next step is to set a row group on the List row.  When the dialog appears, select the field that you wish to group your report by.  A good example in this case would be the employee name or ID. Create a second report which becomes the subreport.  The example below has a matrix control.  Create the report as you would any parameter-driven document by parameterizing the dataset. Add the subreport to the main report inside the row of the List control.  This can be accomplished by either dragging the report from the solution explorer or inserting a Subreport control and then setting the report name property. The last step is to set the parameters on the subreport.  In this case the subreport has EmpId and ReportYear as parameters.  While some of the documentation states that the dialog will automatically detect the child parameters, this has not been my experience.  You must make sure that the names match exactly.  Tie the name of the parameter to either a field in the dataset or a parameter of the parent report. del.icio.us Tags: SQL Server Reporting Services,SSRS,SQL Server,Subreports

    Read the article

  • Rendering a big game universe - bitmaps or vector graphics?

    - by user1641923
    I am new to Android development, though I have much experience with Java, C++, and PHP programming, and a bit of experience with vector graphics too (basic 3d Studio Max, Flash, etc). I am starting to work on an Android game. It is going to be a 2D space shooter/RPG, and I am not going to use any game engines or any 3rd-party libs. I really want to create a very large game universe, or even a pseudo-infinite one (without visible borders, as if it were a 2D projection of a sphere). It should include 10-12 clusters of 7-8 planets/other space objects, a random number of single asteroids/comets which the player can interact with, and also a non-interactive background. I am looking for the least complicated approach to create such a universe. My current ideas are: Simply create bitmaps with space scenery backgrounds that can be seamlessly tiled, construct my 2D universe out of these tiles, then place interactive objects (planets, other spaceships) on them. Or use vector graphics: I would have a solid color background, some random background objects, and gradients here and there. My problems here: lack of knowledge of how well vector graphics is integrated into Android. Performance? Memory usage? Does Android manage big bitmaps well? Do all of the bitmaps have to be in memory during the whole game? I am interested in technical details regarding each of the ideas and a suggestion as to which I should go with.

    Read the article

  • Ways to dynamically render a real world 3d environment in Unity3D

    - by Jake M
    Using Unity3D and C#, I am attempting to display a 3D version of a real-world location. Inside my Unity3D app, the user will specify the GPS coordinates of a location, then my app will have to generate a 3D plane (it doesn't have to be a plane) of that location. The plane will show a 500 metre by 500 metre 3D snapshot of that location. How would you suggest I achieve this in Unity3D? What methodology would you use? NOTE: I understand that this is a very difficult endeavour (rendering real-world locations dynamically in Unity3D), so I expect to perform many actions to achieve this. I just don't know all of the technologies out there and which would be best for my needs. For example: Suggested methodology 1: Prompt the user to specify GPS coords. Use the Google Earth API and HTTP to programmatically obtain a .kmz file describing that location (not sure if Google Earth provides that capability, does it?). Unzip the .kmz so I have the .dae file. Convert that file to a .3ds file using a third-party converter (is there a converter that exists?). Import the .3ds into Unity3D at runtime as a plane (is this possible?). Suggested methodology 2: Prompt the user to specify GPS coords. Use the Google Earth API and HTTP to programmatically obtain a .kmz file describing that location. Unzip the .kmz so I have the .dae file. Parse the .dae file using my own C# parser I will write (do you think it's possible to write a .dae parser that can parse the .dae into an array of Vector3 that describes the height map of that location?). Dynamically create a plane in Unity3D and populate it with my array/list of Vector3 points (is it possible to create a plane this way?). Maybe I am meant to create a mesh instead of a plane? Can you think of any other ways I could render a real-world 3D environment in Unity3D?
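
    (On the mesh question in methodology 2: yes, a mesh can be built directly from a grid of points. A minimal sketch, with a hypothetical helper that assumes the parsed data has already been reduced to a row-major width x height grid of Vector3 vertices:)

      using UnityEngine;

      public static class TerrainMeshBuilder
      {
          // Builds a mesh from a row-major grid of sampled points, e.g. a
          // 500m x 500m height map extracted from the parsed location data.
          public static Mesh BuildMesh(Vector3[] points, int width, int height)
          {
              var mesh = new Mesh { vertices = points };

              // Two triangles per grid cell.
              var triangles = new int[(width - 1) * (height - 1) * 6];
              int t = 0;
              for (int z = 0; z < height - 1; z++)
              {
                  for (int x = 0; x < width - 1; x++)
                  {
                      int i = z * width + x;
                      triangles[t++] = i;
                      triangles[t++] = i + width;
                      triangles[t++] = i + 1;
                      triangles[t++] = i + 1;
                      triangles[t++] = i + width;
                      triangles[t++] = i + width + 1;
                  }
              }
              mesh.triangles = triangles;
              mesh.RecalculateNormals();
              return mesh;
          }
      }

    (Assign the result to a MeshFilter on a GameObject with a MeshRenderer to display it; note that grids over roughly 65,000 vertices would need to be split into chunks, since Unity meshes default to 16-bit indices.)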

    Read the article

  • Benchmarking ORM associations

    - by barerd
    I am trying to benchmark two cases of a self-referential many-to-many as described in the datamapper associations. Both cases consist of an Item class, which may require many other items. In both cases, I required the ruby benchmark library and the source file, created two items and benchmarked the require/unrequire functions as below:
      Benchmark.bmbm do |x|
        x.report("require:") { item_1.require_item item_2, 10 }
        x.report("unrequire:") { item_1.unrequire_item item_2 }
      end
    To be clear, both functions are datamapper add/modify functions like: componentMaps.create :component_id => item.id, :quantity => quantity componentMaps.all(:component_id => item.id).destroy! and links_to_components.create :component_id => item.id, :quantity => quantity links_to_components.all(:component_id => item.id).destroy! The results are variable and in the range of 0.018001 to 0.022001 for the require function in both cases, and 0.006 to 0.01 for the unrequire function in both cases. This made me suspicious about the correctness of my test method. Edit: I went ahead and compared a "get by primary key" case to a "find first matching record" case by:
      (1..10000).each do |i|
        Item.create :name => "item_#{i}"
      end
      Benchmark.bmbm do |x|
        x.report("Get") { item = Item.get 9712 }
        x.report("First") { item = Item.first :name => "item_9712" }
      end
    where the results were very different, like 0 sec compared to 0.0312, as expected. This suggests that the benchmarking works. I wonder whether I benchmarked the two types of associations correctly, and whether a difference between 0.018 and 0.022 sec is significant.

    Read the article

  • Ditch cPanel / WHM in favour of manual setup

    - by BWRic
    We currently use cPanel / WHM on a reseller account but are looking at getting a dedicated server. My first thought was to duplicate this setup on the dedicated box to allow us to quickly create new accounts. It'll be a managed server, so they'll have set up the LAMP stack. I'm curious whether I actually need cPanel and WHM. We don't use many of the features of cPanel / WHM, just creating accounts and databases; clients do not have FTP access. I'm no sysadmin and come from a Windows / GUI background, but have some knowledge of setting up development servers.
    WHM: Creating accounts. I presume this sets up the Apache virtual host, FTP access and DNS settings. I have some knowledge of editing the Apache files to create virtual hosts. Am I correct in thinking that as long as the DNS is pointing to the server IP and the virtual host is configured, the server can serve the (PHP) pages? I'm not sure I need per-site FTP access; as only we will have access, I could have a server-wide, htdocs-only login to view all the sites. The company that supplies the dedicated hosts would also provide their own DNS management tool, so I don't need the cPanel one.
    MySQL: Creating users and databases. We use cPanel to create the MySQL users and databases. As it's a dedicated box and I can have root access, I think this could be replaced by SQLyog for db management and phpMyAdmin for user management.
    Do I need cPanel, or can I get by with editing a few text files to create the accounts, then use the MySQL tools for databases? Or am I missing something major with how the sites are configured?
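
    (On the virtual host point: yes, if DNS resolves to the server's IP and a matching virtual host exists, Apache will serve the pages. A minimal name-based vhost, with placeholder domain and paths, might look roughly like this:)

      <VirtualHost *:80>
          ServerName example.com
          ServerAlias www.example.com
          DocumentRoot /var/www/example.com/htdocs

          ErrorLog /var/log/apache2/example.com-error.log
          CustomLog /var/log/apache2/example.com-access.log combined
      </VirtualHost>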

    Read the article

  • How does Wikipedia's SEO work?

    - by Josh Siegl
    I'm sorry if this question is misplaced or doesn't belong here. I'm currently developing an app for Android and iOS, and of course I'm thinking about the best ways to market it. Last night I Googled somebody else's app and the third link in was a Wikipedia page on it. I never even thought of apps having Wikipedia pages, but alas, there it was. And of course it was very helpful in determining exactly what the app did and in what cases it was useful (something that's absolutely crucial for potential customers to understand). So then I got to thinking that I should create a Wiki for my app, but how does Wikipedia apply SEO? I know that the question could be overly complicated or specific; I'm just looking for general answers. For instance, when somebody Googles my app, where does Wikipedia display in the results? When I create a Wiki for my app, how do I ensure that the Wikipedia page shows in the search results (is there any way to do that?)? I'm sure I'll find all of this out later when I create a Wiki for my app; I guess I'm just asking this out of curiosity. So how does Wikipedia's search engine optimization work (on a page by page basis)?

    Read the article

  • Samba new file ownership, permissions configuration

    - by Martin Melka
    I have recently installed Samba on my server. Now I have a question about permissions and how to set them up. Currently I mount the Samba shared drive to my laptop with this line in /etc/fstab: //<host>/share /mnt/melka-server-data/ cifs username=<usrname> password=<passwd> _netdev 0 0 This works, as I can read the files and create them (as root). The problem is when I want to create files as a regular user. I always get a Permission Denied error. This is the ll output of the mounted folder:
      magicmaster@magicmaster-kubuntu:/mnt$ ll
      total 8
      drwxr-xr-x  3 root        root        4096 lis 11 14:15 ./
      drwxr-xr-x 26 root        root        4096 ríj 26 11:01 ../
      drwxrwxrwx  8 magicmaster magicmaster    0 lis 12 22:12 melka-server-data/
    and the inside:
      magicmaster@magicmaster-kubuntu:/mnt/melka-server-data$ ll
      total 4
      drwxrwxrwx  8 magicmaster magicmaster    0 lis 12 22:12 ./
      drwxr-xr-x  3 root        root        4096 lis 11 14:15 ../
      drwxrwxrwx  5 magicmaster magicmaster    0 lis 12 09:35 downloads/
      drwxrwxrwx  2 magicmaster magicmaster    0 ríj 28 12:57 lost+found/
      drwxrwxrwx 15 magicmaster magicmaster    0 lis 12 09:45 movies/
      drwxrwxrwx  2 magicmaster magicmaster    0 lis  1 21:15 newest/
      drwxrwxrwx  3 magicmaster magicmaster    0 lis  2 23:14 photos/
      drwxrwxrwx  2 magicmaster magicmaster    0 ríj 30 12:44 software/
      -rw-r--r--  1 nobody      nogroup        0 lis 12 22:12 zdar
    I called sudo chown -R magicmaster:magicmaster melka-server-data/ to try and change all the files to belong to me. Then the file zdar was created by magicmaster just by calling touch. I got the Permission Denied, but it was still created, though it belongs to nobody and I can't write into it. When I create a file as root, it still belongs to nobody, but at least I can write into it. What am I missing? I didn't notice anything in the Samba config that would be related to this, and I don't like the idea of having to log on as root in order to copy files. Thanks
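
    (One avenue worth checking, as an illustrative sketch rather than a confirmed fix, since the server-side share config also matters: cifs mounts can map the remote files to a local user and relax the modes with the uid, gid, file_mode and dir_mode mount options, e.g. a variant of the fstab line above:)

      //<host>/share /mnt/melka-server-data/ cifs username=<usrname>,password=<passwd>,uid=magicmaster,gid=magicmaster,file_mode=0664,dir_mode=0775,_netdev 0 0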

    Read the article

  • Initializing entities vs having a constructor parameter

    - by Vee
    I'm working on a turn-based tile-based puzzle game, and to create new entities, I use this code: Field.CreateEntity(10, 5, Factory.Player()); This creates a new Player at [10; 5]. I'm using a factory-like class to create entities via composition. This is what the CreateEntity method looks like:
      public void CreateEntity(int mX, int mY, Entity mEntity)
      {
          mEntity.Field = this;
          TileManager.AddEntity(mEntity, true);
          GetTile(mX, mY).AddEntity(mEntity);
          mEntity.Initialize();
          InvokeOnEntityCreated(mEntity);
      }
    Since many of the entities' components (and their logic) need to know which tile they're in, or which field they belong to, I need mEntity.Initialize(); so there is a point at which the entity knows its own field and tile. The Initialize(); method contains a call to an event handler, so that I can do stuff like this in the factory class:
      result.OnInitialize += () => result.AddTags(TDLibConstants.GroundWalkableTag, TDLibConstants.TrapdoorTag);
      result.OnInitialize += () => result.AddComponents(new RenderComponent(), new ElementComponent(), new DirectionComponent());
    This works so far, but it is not elegant and it's very open to bugs. I'm also using the same idea with components: they have a parameterless constructor, and when you call the AddComponent(mComponent); method on an entity, it is the entity's job to set the component's entity to itself. The alternative would be having Field, int, int parameters in the constructor, to do stuff like: new Entity(Field, 10, 5); But I also don't like the fact that I have to create new entities like this. I would prefer creating entities via the Field object itself. How can I make entity/component creation more elegant and less prone to bugs?
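
    (As an illustrative sketch of the "create via the Field object itself" idea, reusing the question's own types: the Field could both construct and place the entity, so field and tile are known at construction time and the Initialize() step disappears. Hypothetical, not the project's actual API:)

      // Sketch: the Field owns construction; the factory delegate receives the
      // field and tile up front, so no post-hoc Initialize() call is needed.
      // (Func<,,> requires using System; Field, Tile, Entity are the project's types.)
      public T CreateEntity<T>(int mX, int mY, Func<Field, Tile, T> mFactory) where T : Entity
      {
          Tile tile = GetTile(mX, mY);
          T entity = mFactory(this, tile);

          TileManager.AddEntity(entity, true);
          tile.AddEntity(entity);
          InvokeOnEntityCreated(entity);
          return entity;
      }

      // Usage: Player player = field.CreateEntity(10, 5, (f, t) => new Player(f, t));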

    Read the article

  • Strange and erratic transformations when using OpenGL VBOs to render scene

    - by janoside
    I have an existing iOS game with fairly simple scenes (all textured quads) and I'm using Apple's "Texture2D" class. I'm trying to convert this class to use VBOs since the vertices of my objects basically never change so I may as well not re-create them for every object every frame. I have the scene rendering using VBOs but the sizes and orientations of all rendered objects are strange and erratic - though locations seem generally correct. I've been toying with this code for a few days now, and I've found something odd: if I re-create all of my VBOs each frame, everything looks correct, even though I'm almost certain my vertices are not changing. Other notes:
      - I'm basing my work on this tutorial, and therefore am also using "IBOs"
      - I create my buffers before rendering begins
      - My buffers include vertex and texture data
      - I'm using OpenGL ES 1.1
    Fearing some strange effect of the current matrix GL state at the time of buffer creation, I've also tried wrapping my buffer-setup code in a "pushMatrix-loadIdentity-popMatrix" block, which (as expected) had no effect. I'm aware that various articles have been published demonstrating that VBOs may not help performance, but I want to understand this problem and at least have the option to use them. I realize this is a shot in the dark, but has anyone else experienced this type of strange behavior? What might I be doing to result in this behavior? It's rather difficult for me to isolate the problem since I'm working in an existing, moderately complex project, so suggestions about how to approach the problem are also quite welcome.

    Read the article

  • Basic AppFabric Service Bus Programming Lifecycle

    - by kaleidoscope
    The tasks required to create an application that accesses the AppFabric Service Bus are as follows:
    1. Create a service namespace. This service namespace contains the resources used by the AppFabric Service Bus to support the application.
    2. Define the AppFabric Service Bus contract. A contract specifies the signature of the service, the data it exchanges, and other required inputs, behavior specifications, and object invariants.
    3. Implement the contract. To implement a service contract, create a class that implements the interface and specify custom runtime behaviors.
    4. Configure the service by specifying endpoint and other behavior information.
    5. Build and run the service.
    6. Build and run the client application.
    As with any iterative, service-oriented software development, it may not always be appropriate to follow the preceding steps sequentially, or even start from step 1. For example, if you want to build a client for a pre-existing service, you start at step 6. Or, if you are building a host service that others will use, you can skip step 6. Source: http://msdn.microsoft.com/en-us/library/ee173580.aspx   Sarang, K
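
    (As a sketch of steps 2 and 3: AppFabric Service Bus contracts follow the WCF model, so a minimal contract and implementation might look like this; the namespace and names are illustrative:)

      using System.ServiceModel;

      // Step 2: the contract declares the operations the service exposes.
      [ServiceContract(Namespace = "urn:mycompany:samples")]
      public interface IEchoContract
      {
          [OperationContract]
          string Echo(string text);
      }

      // Step 3: a class implementing the contract.
      public class EchoService : IEchoContract
      {
          public string Echo(string text)
          {
              return text;
          }
      }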

    Read the article

  • How to make and restore incremental snapshots of hard disk

    - by brunopereira81
    I use VirtualBox a lot for distro/application testing purposes. One of the features I simply love about it is virtual machine snapshots: it saves the state of a virtual machine and can restore it to its former glory if something you did went wrong, without any problems and without consuming all your hard disk space. On my live systems I know how to create a 1:1 image of the file system, but all the solutions I've known will create a new image of the complete file system. Are there any programs / file systems that are capable of taking a snapshot of the current file system and saving it to another location, but instead of making a complete new image each time, creating incremental backups? To describe what I want more easily: it should be like dd images of a file system, but instead of only full backups it would also create incremental ones. I am not looking for clonezilla, etc. It should run within the system itself with no (or almost no) intervention from the user, but contain all the data of the file systems. I am also not looking for a "duplicity, back up your whole system excluding some folders" script + dd to save your MBR. I can do that myself; I'm looking for extra finesse. I'm looking for something I can run before doing massive changes to a system, and then, if something went wrong or I burned my hard disk after spilling coffee on it, I can just boot from a liveCD and restore a working snapshot to the hard disk. It does not need to be daily; it doesn't even need a schedule. Just run it once in a while and let it do its job, preferably raw-based, not file-copy based.

    Read the article

  • How To Check If a Transaction Related to Oracle Asset Tracking Has Been Accounted in SLA

    - by LuciaC-Oracle
    In Oracle Asset Tracking (OAT), we often see situations where a pending transaction has failed to be processed by the OAT programs. Typical situations can be: a pending transaction errors with "Unable to derive accounts from sub ledger accounting for the material transaction"; a transaction is not picked up by the OAT programs. The Create Accounting program log file will show error messages and possible corrective actions to solve the error.  But as this is usually a scheduled program, any errors that are reported are often missed by users. To help OAT users identify whether a transaction has failed and accounting has not been created, we have now created a SQL script which can be run for any pending transaction: How To Check If a Transaction Related to Oracle Asset Tracking Has Been Accounted in SLA? (Doc ID 1673414.1) Using the script in this note, the user can pass the material transaction ID for the related transaction, and the script will check whether SLA accounting entries have been created for this specific transaction. If the SLA accounting entries have not been created, the script will prompt the user to run the Create Accounting program.  After Create Accounting has been run, the user can run the script again to confirm that accounting has been created.

    Read the article

  • Making Multilingual J! 1.5 + Joomfish + VM 1.17 more workable

    - by rhand
    I have been working with a multilingual Joomla! 1.5.23 e-commerce website for a client for quite a while and have made several customizations. But the client is still not happy that he has to adjust content in at least three locations: Joomfish, Virtuemart, and the Article Manager. Joomfish is nice in that it allows you to create multilingual content and copy and paste the source language on the same page, which makes translation work easier, but it is annoying in that you have to edit several custom fields at different locations/content types. As Joomla! source-language content still needs to be created in the Article Manager first, this is the second location the client has to work in. The third location is Virtuemart. Here all the products and product categories are created, and here we added some custom fields as well. Now I am considering upgrading the website to Joomla 1.7, or later on to 1.8. These Joomla! versions have better multilingual support. But I wonder if we can really make the client's life easier. We will still have to copy the source language to a new article and create content in another language. We will still have the issue of content in custom fields that needs to be translated, and we will still have to create content. Should I go for another CMS such as Magento, or do you think there is a way in a more recent Joomla! version to work with all content in one, or at most two, locations?

    Read the article

  • Oracle Identity Manager ADF Customization

    - by Arda Eralp
    This blog entry includes an example of customizing the Oracle Identity Manager (OIM) Self Service screen. Before customization, all users who can log in to OIM Self Service can see the "Administration" tab on the left menu. In this example we create a "Manager" role so that only users who have the Manager role can see the "Administration" tab.
    Step 1: Create the "Manager" role.
    Step 2: Create a Sandbox.
    Step 3: Customize ADF: select "Customize" on the top menu, select "Source" instead of "Design" on top, select the "Administration" tab with the blue rectangle and edit the component, then edit "visible" with the expression builder: #{oimcontext.currentUser.roles['Manager'] != null} and Apply.
    Step 4: Apply to All and Publish the sandbox.
    Notes: the following objects can be used in expressions (those marked return a Boolean):
      #{oimcontext.currentUser['ATTRIBUTE_NAME']}
      #{oimcontext.currentUser['UDF_NAME']}
      #{oimcontext.currentUser.roles}
      #{oimcontext.currentUser.roles['SYSTEM ADMINISTRATORS'] != null} (Boolean)
      #{oimcontext.currentUser.adminRoles['OrclOIMSystemAdministrator'] != null} (Boolean)

    Read the article

  • Feature Updates to the Windows Azure Portal

    - by Clint Edmonson
    Lots of activity over at the Windows Azure portal this weekend, including some exciting new features and major improvements to existing features. Here are the highlights:
      Support for Managing Co-administrators: set up account co-administrators to allow others to share service management duties for each Azure subscription.
      Import/Export support for SQL Databases: export existing SQL Azure databases to blob storage using SQL Server 2012’s BACPAC format; create a new SQL Azure database from an existing BACPAC stored in blob storage.
      Storage Container Management and Access Control: create blob storage containers directly within the portal, edit their public/private access settings, and drill into storage containers to see the blobs contained within them.
      Improved Cloud Service Status Notifications: detailed health status information about cloud services and roles as they transition between states.
      Virtual Machine Experience Enhancements: option to automatically delete corresponding VHD files from blob storage when deleting VM disks.
      Service Bus Management and Monitoring: ability to create and manage service bus Namespaces, Queues, Topics, Relays and Subscriptions; rich monitoring of Topics, Queues, and Subscriptions with detailed and customizable dashboard metrics; entity status (Topic, Queue, or Subscription) can be changed interactively via the dashboard; direct links to the Access Control Services (ACS) namespaces when working with service bus access keys.
      Media Services Monitoring Support: monitor encoding jobs that are queued for processing as well as active, failed and queued tasks for encoding jobs.
    The above features are all now live in production and available to use immediately.  If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted Reference ID: P7VVJCM38V8R

    Read the article

  • SOA Composite Sensors : Good Practice

    - by angelo.santagata
    I was discussing an interesting design problem with a colleague of mine, Niall (his blog), on the topic of how to cancel an in-flight SOA Composite process.  Obviously one way to do this is to cancel the process from Enterprise Manager (http://host:port/em); however, we were thinking this isn't a “user friendly” way of doing it. If you look at Niall's blog you'll see he's highlighted a number of different APIs which give you the ability to manipulate the SCA instance, e.g.: a code snippet to purge (delete) an instance; how to determine the instanceId from a composite_sensor_value using the “composite_sensor_value” table; how to determine a BPEL process status using the cube_instance table. Now all of these require that you know the instanceId of your SOA Composite. How does one find this out? Well, the easiest way of doing this is to create a composite sensor on the SCA component. A composite sensor is simply a way of publishing a piece of business data as part of your composite. The magic here is that you can later query composites based on this value. So a good practice is, for any composite you create, to consider publishing a composite sensor value using a primary key of some sort, e.g. orderId; that way, if you need to manipulate/query composites, you can easily look up the instanceId using the sensor id. For information on how to create a composite sensor, see this documentation link.

    Read the article

  • Can't run Minecraft on Ubuntu 12.04 LTS [duplicate]

    - by user170011
    I was trying to run Minecraft on my laptop with Ubuntu 12.04 LTS 64-bit. I have a Lenovo IdeaPad P580 with 7.7 GB of RAM and an Intel® Core™ i7-3520M CPU @ 2.90GHz × 4 processor. Under the graphics section of the system overview in Ubuntu, it says I have none installed. My computer comes with an NVIDIA GeForce graphics card, but it isn't recognized. When I start Minecraft I get this crash report:
      ---- Minecraft Crash Report ----
      // Shall we play a game?

      Time: 24/06/13 7:23 PM
      Description: Failed to start game

      org.lwjgl.LWJGLException: Could not init GLX
          at org.lwjgl.opengl.LinuxDisplayPeerInfo.initDefaultPeerInfo(Native Method)
          at org.lwjgl.opengl.LinuxDisplayPeerInfo.<init>(LinuxDisplayPeerInfo.java:52)
          at org.lwjgl.opengl.LinuxDisplay.createPeerInfo(LinuxDisplay.java:684)
          at org.lwjgl.opengl.Display.create(Display.java:854)
          at org.lwjgl.opengl.Display.create(Display.java:784)
          at org.lwjgl.opengl.Display.create(Display.java:765)
          at net.minecraft.client.Minecraft.a(SourceFile:235)
          at avv.a(SourceFile:56)
          at net.minecraft.client.Minecraft.run(SourceFile:507)
          at java.lang.Thread.run(Thread.java:679)

      A detailed walkthrough of the error, its code path and all known details is as follows:

      -- System Details --
      Details:
          Minecraft Version: 1.5.2
          Operating System: Linux (amd64) version 3.5.0-34-generic
          Java Version: 1.6.0_27, Sun Microsystems Inc.
          Java VM Version: OpenJDK 64-Bit Server VM (mixed mode), Sun Microsystems Inc.
          Memory: 406175448 bytes (387 MB) / 514523136 bytes (490 MB) up to 1908932608 bytes (1820 MB)
          JVM Flags: 2 total; -Xmx2048M -Xms512M
          AABB Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
          Suspicious classes: No suspicious classes found.
          IntCache: cache: 0, tcache: 0, allocated: 0, tallocated: 0
          LWJGL: 2.4.2
          OpenGL: ~~ERROR~~ NullPointerException: null
          Is Modded: Probably not. Jar signature remains and client brand is untouched.
          Type: Client (map_client.txt)
          Texture Pack: Default
          Profiler Position: N/A (disabled)
          Vec3 Pool Size: ~~ERROR~~ NullPointerException: null
    I can run it on different versions of Linux such as Fedora.

    Read the article

  • Ubuntu 13.04 alongside Windows 8 - How to partition from Windows

    - by mengelkoch
    I plan to install Ubuntu 13.04 alongside Windows 8, and I'm looking for a CLEAR answer on how to conduct partitioning appropriately. I'm very new to all of this so a thorough explanation with minimal jargon would be great. I have an Acer Aspire M5 x64 with 6G RAM. I think I already figured out how to deal with the fast startup, UEFI and SecureBoot issues (I disabled fast startup and disabled Secure Boot). I am able to boot into Ubuntu from a LiveUSB, and I think I am ready to install Ubuntu. Note - despite some advice found here, I do have to disable SecureBoot to boot 13.04 from my LiveUSB. From what I have read here, it seems that I should (at least at first) create the partitions from WITHIN Windows 8, not from the LiveUSB, to avoid reported problems. I have run compmgmt.msc and I see the existing partitions. I see the following: Disk 0: 400 MB Recovery; 300 MB EFI System; Acer (C:) 444.95 GB (Boot, Page File, Crash Dump, Primary Partition); 20 GB Recovery Disk 1: 3.74 GB Primary Partition; 14.90 GB Primary Partition I gather I need to create a mounting point '/' Partition (??), a swap partition, and a home partition. Please explain what these are, how big they should be, how I create them from Windows Disk Management, and anything else I need to know. Eventually, I plan to fully replace Windows 8 with Ubuntu, but for now I want to run alongside Windows 8 and not screw things up. I don't have any critical files saved on this computer yet. Thanks.

    Read the article

  • Database Migration Scripts: Getting from place A to place B

    - by Phil Factor
    We’ll be looking at a typical database ‘migration’ script which uses an unusual technique to migrate existing ‘de-normalised’ data into a more correct form. So, the book-distribution business that uses the PUBS database has gradually grown organically, and has slipped into ‘de-normalisation’ habits. What’s this? A new column with a list of tags or ‘types’ assigned to books. Because books aren’t really in just one category, someone has ‘cured’ the mismatch between the database and the business requirements. This is fine, but it is now proving difficult for their new website that allows searches by tags. Any request for a history book really has to look in the entire list of associated tags rather than the ‘Type’ field that only keeps the primary tag. We have other problems. The TypeList column has duplicates in there which will be affecting the reporting, and there is the danger of mis-spellings getting in there. The reporting system can’t be persuaded to do reports based on the tags, and the database developers are complaining about the unCoddly things going on in their database. In your version of PUBS, this extra column doesn’t exist, so we’ve added it and put in 10,000 titles using SQL Data Generator.

      /* So how do we refactor this database? Firstly, we create a table of all the tags. */
      IF OBJECT_ID('TagName') IS NULL OR OBJECT_ID('TagTitle') IS NULL
      BEGIN
        CREATE TABLE TagName
          (TagName_ID INT IDENTITY(1,1) PRIMARY KEY,
           Tag VARCHAR(20) NOT NULL UNIQUE)

        /* ...and we insert into it all the tags from the list (remembering to take out any leading spaces) */
        INSERT INTO TagName (Tag)
          SELECT DISTINCT LTRIM(x.y.value('.', 'Varchar(80)')) AS [Tag]
          FROM (SELECT Title_ID,
                       CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
                FROM dbo.titles) g
          CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )

        /* we can then use this table to provide a table that relates tags to articles */
        CREATE TABLE TagTitle
          (TagTitle_ID INT IDENTITY(1, 1),
           [title_id] [dbo].[tid] NOT NULL REFERENCES titles (Title_ID),
           TagName_ID INT NOT NULL REFERENCES TagName (Tagname_ID)
           CONSTRAINT [PK_TagTitle]
             PRIMARY KEY CLUSTERED ([title_id] ASC, TagName_ID) ON [PRIMARY])

        CREATE NONCLUSTERED INDEX idxTagName_ID ON TagTitle (TagName_ID)
          INCLUDE (TagTitle_ID, title_id)

        /* ...and it is easy to fill this with the tags for each title ... */
        INSERT INTO TagTitle (Title_ID, TagName_ID)
          SELECT DISTINCT Title_ID, TagName_ID
          FROM (SELECT Title_ID,
                       CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
                FROM dbo.titles) g
          CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )
          INNER JOIN TagName ON TagName.Tag = LTRIM(x.y.value('.', 'Varchar(80)'))
      END

      /* That's all there was to it. Now we can select all titles that have the military tag, just to try things out */
      SELECT Title FROM titles
        INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
        INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
        WHERE tagname.tag = 'Military'

      /* and see the top ten most popular tags for titles */
      SELECT Tag, COUNT(*) FROM titles
        INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
        INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
        GROUP BY Tag ORDER BY COUNT(*) DESC

      /* and if you still want your list of tags for each title, then here they are */
      SELECT title_ID, title, STUFF(
        (SELECT ',' + tagname.tag FROM titles thisTitle
           INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
           INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
         WHERE ThisTitle.title_id = titles.title_ID
         FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
      FROM titles
      ORDER BY title_ID

    So we’ve refactored our PUBS database without pain. We’ve even put in a check to prevent it being re-run once the new tables are created. Here is the diagram of the new tag relationship. We’ve done both the DDL to create the tables and their associated components, and the DML to put the data in them. I could have also included the script to remove the de-normalised TypeList column, but I’d do a whole lot of tests first before doing that. Yes, I’ve left out the assertion tests too, which should check the edge cases and make sure the result is what you’d expect. One thing I can’t quite figure out is how to deal with an ordered list using this simple XML-based technique. We can ensure that, if we have to produce a list of tags, we can get the primary ‘type’ to be first in the list, but what if the entire order is significant? Thank goodness it isn’t in this case. If it were, we might have to revisit a string-splitter function that returns the ordinal position of each component in the sequence. You’ll see immediately that we can create a synchronisation script for deployment from a comparison tool such as SQL Compare, to change the schema (DDL). On the other hand, no tool could do the DML to stuff the data into the new table, since there is no way that any tool will be able to work out where the data should go. We used some pretty hairy code to deal with a slightly untypical problem. We would have to do this migration by hand, and it has to go into source control as a batch. If most of your database changes are to be deployed by an automated process, then there must be a way of over-riding this part of the data synchronisation process: taking the part of the script that fills the tables, checking that the tables have not already been filled, and executing it as part of the transaction. Of course, you might prefer the approach I’ve taken with the script of creating the tables in the same batch as the data conversion process, and then using the presence of the tables to prevent the script from being re-run. The problem with scripting a refactoring change to a database is that it has to work both ways. If we install the new system and then have to roll back the changes, several books may have been added, or had their tags changed, in the meantime. Yes, you have to script any rollback! These have to be mercilessly tested, and put in source control just in case of the rollback of a deployment after it has been in place for any length of time. I’ve shown you how to do this with this part of the script...

      /* and if you still want your list of tags for each title, then here they are */
      SELECT title_ID, title, STUFF(
        (SELECT ',' + tagname.tag FROM titles thisTitle
           INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
           INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
         WHERE ThisTitle.title_id = titles.title_ID
         FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
      FROM titles
      ORDER BY title_ID

    ...which would be turned into an UPDATE … FROM script:

      UPDATE titles SET typelist = ThisTagList
      FROM (SELECT title_ID, title, STUFF(
              (SELECT ',' + tagname.tag FROM titles thisTitle
                 INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
                 INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
               WHERE ThisTitle.title_id = titles.title_ID
               ORDER BY CASE WHEN tagname.tag = titles.[type] THEN 1 ELSE 0 END DESC
               FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '') AS ThisTagList
            FROM titles) f
      INNER JOIN Titles ON f.title_ID = Titles.title_ID

    You’ll notice that it isn’t quite a round trip, because the tags are in a different order, though we’ve managed to make sure that the primary tag is the first one, as originally. So, we’ve improved the database for the poor book distributors using PUBS. It is not a major deal, but you’ve got to be prepared to provide a migration script that will go both forwards and backwards. Ideally, database refactoring scripts should be able to go from any version to any other. Schema synchronization scripts can do this pretty easily, but no data synchronisation scripts can deal with serious refactoring jobs without the developers being able to specify how to deal with cases like this.

    Read the article

  • Develop for Desktop and mobile use?

    - by ran2
    I am at the very beginning of developing an app / desktop program. I want it to be cross-platform and possibly also available as a tablet version (preferably Android Ice Cream Sandwich). Note that I need to run it offline. I thought about the following approaches: Adobe AIR, since I do not need much performance, plus I did some web programming in the past, which might be of some use. AFAIK it would run on OS X and Windows, and should run on mobile OSes too. Qt: I found some nice Qt-based desktop apps recently and read that it also works on Android, plus I like the SDK. HTML5 / JS: again, my web background should help me here; I won't need server-side scripts, thus it should work without installing anything but a browser. How easily could this be converted into an Android app? There might be a plethora of other (better) ways to do it, but I haven't thought of them yet. Can you help out? How would you create such an application? Would it be better to build a pure desktop client and then create tablet versions? Or would you rather start by creating a website and worry later about how to turn it into an app?

    Read the article

  • What to expect when creating a style guide?

    - by ted.strauss
    My organization would like to create a full-fledged style guide that will be applicable to internal & external web sites, print advertising, trade show design, and overall branding. This article lays out the scope we're aiming for, and has links to many great example style guide PDFs. The goal is to create a style guide comparable to one of these. I'd like to set realistic expectations within my organization for creating this document. So I have a few questions pertaining to this: We don't have design staff. Should we be looking for a design firm or freelancer to come in for a 2-6 month contract, or do we need a longer commitment? If we do go with a firm or freelancer, would the pay scale be comparable to typical design work, or is a style guide a higher order of work? How long should it take a pro to create a style guide? To make estimates more concrete, let's say web only, including all custom graphics. Any red flags to watch out for? (Compare: a new coder who fails to use CSS properly would be a red flag.)

    Read the article

  • Releasing software/Using Continuous Integration - What do most companies seem to use?

    - by Sagar
    I've set up our continuous integration system, and it has been working for about a year now. We have finally reached a point where we want to do releases using it. Before our CI system, the processes that were used were:
      (Develop) -> Ready for release -> Create a branch -> (Build -> Fix bugs as QA finds them) Loop -> Final build -> Tag
      (Develop) -> Ready for release -> (build -> fix bugs) Loop -> Tag
    Our CI setup: 1 server for development (DEV), 1 server for qa/release (QA). The second one has integrated into CI perfectly. I create a branch when the software is ready for release, and the branch never changes thereafter, which means the build is reproducible without having to change the CI job. Any future development takes place on HEAD, and even maintenance releases get a completely new branch and a completely new job, which remains on the CI system forever, and then some. The first method is harder to adapt. If the branch changes, the build is not reproducible unless I use the tag to build [jobs on the CI server use the branch for QA/RELEASE, and HEAD for development builds]. However, if I use the tag to build, I have to create a new CI job to build from the tag (and lose the changelog on the server), or change the existing job (and lose the original job configuration). I know this sounds complicated, and if required, I will rewrite/edit to explain the situation better. However, my question: [if at all] what process does your company use to release software using continuous integration systems? Is it even done using the CI system, or manually?

    Read the article
