Search Results

Search found 39497 results on 1580 pages for 'two shoes'.

Page 4/1580 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Four monitor setup with two Nvidia graphics cards

    - by user94329
    I'm running Ubuntu 12.04 and I have two NVIDIA Quadro 2000 video cards, each with two monitors plugged in (all four monitors are identical). I have the latest NVIDIA drivers, and I'm trying to use the NVIDIA control panel to use all four monitors, but I can't get it to work. Currently, my configuration uses TwinView to have two monitors per X screen. This doesn't work well: either I turn Xinerama on, and nothing appears on the screen when I start an X session with Compiz enabled (things only work in Ubuntu 2D), or I turn Xinerama off, and Compiz works, but then I can't drag windows between the two screens and I have no idea how to start applications on the other screen. Is there a better way to configure my four-monitor setup? Is there a way to get both GPUs onto a single X screen?

    Read the article

  • Installing Ubuntu 12.04 LTS along with Windows XP on two different HDDs

    - by chachu
    I have two HDDs. The first HDD has four partitions, with Windows XP in the first partition. The second has two partitions and 15 GB of unallocated space. I tried to install Ubuntu 12.04 LTS on the second HDD using the unallocated space. I tried six times. Each time it installed, but after the first restart it boots straight into Windows XP on the first HDD and no boot options appear. Every time, I found that the Ubuntu installation used the unallocated space and made two partitions, one ext3 and the other Linux swap. I don't know what went wrong. During installation, Ubuntu detected Windows XP. Can anybody help me?

    Read the article

  • Run two shell files with threads

    - by user1149157
    How can I run two shell files in parallel so that they do not share the same JVM? Maybe I should use threads, but how do I run the two shell files with two threads? File 1: #!/bin/bash # Script for running several experimentations on the same JVM # Usage: TRACE_DIR NB_EXPE Factories... param="parameter1" another="parameter2" for ((i = 10; i >= 0; i -= 1)); do echo "run my file with $param $another"; done File 2: #!/bin/bash # Script for running several experimentations on the same JVM # Usage: TRACE_DIR NB_EXPE Factories... a="101" b="400" c="500" echo "run my programme with $a $b $c"
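
    A minimal sketch of the parallel-run part in Python (the names "file1.sh" and "file2.sh" are placeholders): each script gets its own operating-system process, so anything it launches, including a JVM, is not shared with the other script. The shell-only equivalent is simply to start each script in the background from a wrapper script and then wait for both.

        import subprocess

        # Launch both scripts at the same time, each in its own process.
        procs = [subprocess.Popen(["bash", path]) for path in ("file1.sh", "file2.sh")]

        # Block until both have finished.
        for p in procs:
            p.wait()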

    Read the article

  • Bridging Two Worlds: Big Data and Enterprise Data

    - by Dain C. Hansen
    The big data world is in vogue in today’s IT conversations. It’s a world of volume, velocity, variety – tantalizing us with its untapped potential. It’s a world of transformational game-changing technologies that have already begun to alter the information management landscape. One of the reasons that big data is so compelling is that it’s a universal challenge that impacts every one of us. Whether it is healthcare, financial, manufacturing, government, or retail, big data presents a pressing problem for many industries: how can so much information be processed so quickly to deliver the ‘bigger’ picture? With big data we’re tapping into new information that didn’t exist before: social data, weblogs, sensor data, complex content, and more. What also makes big data revolutionary is that it turns traditional information architecture on its head, putting into question commonly accepted notions of where and how data should be aggregated, processed, analyzed, and stored. This is where Hadoop and NoSQL come in – new technologies which solve new problems for managing unstructured data. And now for some worst practices that I'd recommend you please not follow: Worst Practice Lesson 1: Throw away everything that you already know about data management and data integration tools, and start completely over. One shouldn’t forget what’s already running in today’s IT. Today’s Business Analytics, Data Warehouses, Business Applications (ERP, CRM, SCM, HCM), and even many social, mobile, and cloud applications still rely almost exclusively on structured data – or what we’d like to call enterprise data. This dilemma is what today’s IT leaders are up against: what are the best ways to bridge enterprise data with big data? And what are the best strategies for dealing with the complexities of these two unique worlds? Worst Practice Lesson 2: Throw away all of your existing business applications … because they don’t run on big data yet. Bridging the two worlds of big data and enterprise data means considering solutions that are complete, based on emerging Hadoop technologies (as well as traditional ones), and poised for success through integrated design tools and integrated platforms that connect to your existing business applications and support real-time analytics. Leveraging these types of best practices translates to improved productivity, lowered TCO, IT optimization, and better business insights. Worst Practice Lesson 3: Separate out [and keep separate] your big data sandboxes from all the current enterprise IT systems. Don’t mix sand among playgrounds. We didn't tell you that you wouldn't get dirty doing this. Correlation between the two worlds is key. 
The real advantage to analyzing big data comes when you can correlate it with the existing data in your data warehouse or your current applications to make sense of the larger patterns. If you have not followed these worst practices 1-3 then you qualify for the first step of our journey: bridging the two worlds of enterprise data and big data. Over the next several weeks we’ll be discussing this topic along with several others around big data as it relates to data integration. We welcome you to join us in the conversation by following us on twitter on #BridgingBigData or download our latest white paper and resource kit: Big Data and Enterprise Data: Bridging Two Worlds.

    Read the article

  • Fastest PHP Routine To Match Words

    - by Volomike
    What is the fastest way in PHP to take a keyword list and match it to a search result (like an array of titles) for all words? For instance, if my keyword phrase is "great leather shoes", then the following titles would be a match: "Get Some Really Great Leather Shoes", "Leather Shoes Are Great", "Great Day! Those Are Some Cool Leather Shoes!", "Shoes, Made of Leather, Can Be Great" ...while these would not be a match: "Leather Shoes on Sale Today!", "You'll Love These Leather Shoes Greatly", "Great Shoes Don't Come Cheap". I imagine there's some trick with array functions or a RegEx (Regular Expression) to achieve this rapidly.
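
    The question asks for PHP, but the core check ("every keyword appears as a whole word in the title") is language-agnostic; here is a small sketch of that logic in Python, with the keyword phrase hard-coded for illustration. In PHP the same shape falls out of splitting the title on non-letter characters and comparing against the keyword list with array functions.

        import re

        def matches_all(title, phrase="great leather shoes"):
            # Collect the whole words of the title, then require every keyword
            # to be present; "Greatly" will not satisfy "great".
            words = set(re.findall(r"[a-z']+", title.lower()))
            return all(kw in words for kw in phrase.lower().split())

        # matches_all("Get Some Really Great Leather Shoes")       -> True
        # matches_all("You'll Love These Leather Shoes Greatly")   -> False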

    Read the article

  • How can I have two web roots on one server, so I can point two domains to two different roots?

    - by Daniel F. Dietzel
    I usually ask my n00b questions on Stack Exchange, but now I'm setting up a VPS and have a question. My usual site.com points to my VPS's IP xxx.xxx.xxx.xxx. Now I have a different domain, and I'd like to point that to another location, say xxx.xxx.xxx.xxx/myname. However, when I try to add that as an "A" record in the DNS settings it won't let me. FYI I'm using chicagovps.net with Ubuntu 11.10 and Nginx. Thanks!

    Read the article

  • What if “Microsoft” were in our shoes? About Windows Phone

    - by Vijaya Malla
    This is what I think about Microsoft Windows Phone, and what Microsoft might see if it were in our shoes, looking at the various phones available, their configurations, memory, front-facing cameras and so on. Microsoft disappointed the USA customer base again by not bringing the Nokia Lumia 800 here. The Past: A few years ago, business people were on their BlackBerrys and gadget lovers were on crappy Windows OS devices. The world was going along fine until Apple arrived with a revolutionary device, the iPhone, which completely changed our perception of the phone and of how great a smartphone can be. It changed not just the phone but the whole technology industry. The romantic appeal of the phone and its smooth touch and feel made everyone want one of those bad boys. Sales went up not just for Apple but for AT&T too. Even though everyone complained about AT&T's signal strength, everyone wanted to be on it because it had the iPhone. The whole world wanted the iPhone back then except Microsoft, which offered a few comments on how it was not going to last in the market. But it did great and rocked the industry. A few years later, with the iPhone and Android taking over the smartphone market, Microsoft realized that it should be in the game too. It worked on the design and gave us the best mobile OS ever. Everyone thinks that iOS is a great OS for phones, but if you have touched a Windows Phone and used it for real then you will realize its strengths. So last year we welcomed Windows Phone 7. The Present: Windows Phone 7 has the fastest-growing market. The phones are cheap, and you can buy one from any carrier out there. The phones became smarter and smarter with the recent "Mango" (7.5) update, and with the collaboration with Nokia, Microsoft created a new ecosystem for smartphones with the best smartphone hardware and the best smartphone software. Everyone in the world was excited about the collaboration. As we floated on cloud nine imagining Nokia-made Windows Phones, we all heard good news from Nokia at Nokia World. Nokia showed the world what the best hardware-making company can do with the Windows Phone 7.5 OS. The Nokia Lumia 800 and 710 took the spotlight. Everyone here in the USA and all over the world wanted to own a Nokia Lumia 800 because of the design, the software, and the proprietary apps from Nokia (maps, ESPN, drive and music). If the USA market had had the Nokia Lumia 800, it would have been the best step Microsoft and Nokia had ever made in their smartphone history. With all the numbers going to Android and iPhone, it's not clear why Microsoft and Nokia did not release the Lumia 800 here in the USA. It's unclear whether Microsoft has learnt the lesson or not; if it has, I guess Microsoft needs to get the Nokia Lumia 800 to the USA. The Future: This is where we hope we get the best from Microsoft. I was an iPhone user; I used the 2G, 3G, 3GS and 4, then moved to Windows Phone, and I never felt this happy with my iPhones. From the day Nokia announced the partnership with Microsoft and said that it was going to come up with a new Nokia Windows Phone, I have been dreaming of my Nokia phone, but it looks like that is not going to happen any time soon. My thoughts about the market: Nokia has the biggest market base in the world. Even though people have moved to Android or iPhone over the years, in other parts of the world, like India and China, people still love to use Nokia.
    Everyone who uses a Windows Phone now will wait for the day the Nokia Lumia comes to the USA, but what either or both of the companies should do for a better market share is to make a very aggressive move with the hardware and bet on the devices. I am pretty sure that it will work. Everyone here in the USA would like to have a dual-core Windows Phone with a front-facing camera and all the other crazy things that Android and Apple phones offer. I think we just have to wait for that day and hope that it comes soon. Love Microsoft and Nokia. Thank you for reading.

    Read the article

  • How to write a Bash script to open two different terminals

    - by Ahmed Zain El Dein
    How do I write a Bash script to open two different tabbed terminals and write commands into both of them separately, to be executed independently of each other? For instance: terminal number one opens Skype, and terminal number two opens something else. In the end, I want one more thing: can I write my Skype username and password in the Bash script so that they are entered into Skype automatically when it opens in terminal one, and it then logs in too? Thanks
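
    A rough sketch of the first part, assuming gnome-terminal is available (the same two lines inside a shell script would work just as well); "skype" and "top" are only example commands, and the -e option may be deprecated on newer gnome-terminal releases.

        import subprocess

        # Open two independent terminal windows, each running its own command.
        subprocess.Popen(["gnome-terminal", "-e", "skype"])
        subprocess.Popen(["gnome-terminal", "-e", "top"])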

    Read the article

  • How to join two collections with LINQ

    - by JustinGreenwood
    Here is a simple and complete example of how to perform joins on two collections with LINQ. I wrote it for a friend to show him, in one simple file, the power of LINQ queries and anonymous objects. In the file below, there are two simple data classes defined: Person and Item. In the beginning of the main method, two collections are created. Note that the Item's OwnerId field reference the PersonId of a Person object. The effect of the LINQ query below is equivalent to a SQL statement looking like this: select Person.PersonName as OwnerName, Item.ItemName as OwnedItem from Person inner join Item on Item.OwnerId = Person.PersonId order by Item.ItemName desc; using System; using System.Collections.Generic; using System.Linq; namespace LinqJoinAnonymousObjects { class Program { class Person { public int PersonId { get; set; } public string PersonName { get; set; } } class Item { public string ItemName { get; set; } public int OwnerId { get; set; } } static void Main(string[] args) { // Create two collections: one of people, and another with their possessions. var people = new List<Person> { new Person { PersonId=1, PersonName="Justin" }, new Person { PersonId=2, PersonName="Arthur" }, new Person { PersonId=3, PersonName="Bob" } }; var items = new List<Item> { new Item { OwnerId=1, ItemName="Armor" }, new Item { OwnerId=1, ItemName="Book" }, new Item { OwnerId=2, ItemName="Chain Mail" }, new Item { OwnerId=2, ItemName="Excalibur" }, new Item { OwnerId=3, ItemName="Bubbles" }, new Item { OwnerId=3, ItemName="Gold" } }; // Create a new, anonymous composite result for person id=2. var compositeResult = from p in people join i in items on p.PersonId equals i.OwnerId where p.PersonId == 2 orderby i.ItemName descending select new { OwnerName = p.PersonName, OwnedItem = i.ItemName }; // The query doesn't evaluate until you iterate through the query or convert it to a list Console.WriteLine("[" + compositeResult.GetType().Name + "]"); // Convert to a list and loop through it. var compositeList = compositeResult.ToList(); Console.WriteLine("[" + compositeList.GetType().Name + "]"); foreach (var o in compositeList) { Console.WriteLine("\t[" + o.GetType().Name + "] " + o.OwnerName + " - " + o.OwnedItem); } Console.ReadKey(); } } } The output of the program is below: [WhereSelectEnumerableIterator`2] [List`1] [<>f__AnonymousType1`2] Arthur - Excalibur [<>f__AnonymousType1`2] Arthur - Chain Mail

    Read the article

  • What advantages do we have when creating a separate mapping table for two relational tables

    - by Pankaj Upadhyay
    In various open source CMSes, I have noticed that there is a separate table for mapping two related tables. For example, for categories and products there is a separate product_category_mapping table. This table just has a primary key and two foreign keys, one from the categories table and one from the products table. My question is: what are the benefits of this database design over just linking the tables directly by defining a foreign key in either table? Is it just a matter of convenience?

    Read the article

  • Attaching two objects and changing their world matrices accordingly

    - by A-Type
    I'm having a hard time wrapping my head around the transformations required to bind two objects together in either a two-way or one-way relationship. I will need to implement both types. For the first case, I want to be able to 'couple' two ships together in space. The ships have different mass, of course. Forces applied to either ship will use combined mass and moment of inertia to calculate and move both ships. The trick is, being sure that the point at which they are coupled remains the same, and they don't move at all relative to each other. The second case is similar: I want a ship to be able to enter the atmosphere of a planet and move relative to the planet. The planet will be orbiting the sun, which is fixed at 0,0,0. Essentially, when the ship is sitting still outside of the atmosphere, the planet will move past it on its course-- but when the ship is sitting still inside the atmosphere, it moves and rotates with the planet, so that it is always relative to the horizon. Essentially, the vertices which make up the ship are now transformed just like the ones that make up the planet, except that the ship can move itself around relative to the planet. I get the feeling I can implement both of these with the same code. Essentially, I am thinking of giving each object (which I call Fixtures) a list of "slave" Fixtures onto which that Fixture's world matrix is imposed. So, this would be the planet imposing its world on any contained ships. In the case of coupling, I would simply make each ship a slave of the other, somehow. Obviously I can't just multiply the ship's world matrix by the planet's, or each ship by the others. What I'd like some help with is what calculations to make in order to get a nice, seamless relative world to the other object. I was thinking maybe I could just multiply the world of the slave by the inverse of the master, but then when you couple two ships you would lose all that world data. So, perhaps I need an intermediate "world" which is the absolute world, but use a secondary "final world" to actually transform the objects?
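
    A small sketch of the "slave follows master" idea, assuming 4x4 row-vector-style world matrices (world = local * parentWorld); the function names are hypothetical, not from the post. At attach time you remember the slave's pose relative to the master, and every frame afterwards you rebuild the slave's absolute world from the master's current world, so the coupling point can never drift.

        import numpy as np

        def attach(slave_world, master_world):
            # Capture the slave's pose relative to the master at the moment of attachment.
            return slave_world @ np.linalg.inv(master_world)

        def slave_world_now(relative_to_master, master_world):
            # Re-derive the slave's absolute world from the master's current world.
            return relative_to_master @ master_world

    For the two-ship coupling this suggests picking one ship as the reference frame while they are joined (and driving it with the combined mass and inertia), rather than literally making each ship a slave of the other.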

    Read the article

  • HP dv6 6080 with two graphics cards

    - by Taher
    My laptop has two graphics cards: one is an ATI Radeon 6770M and the other is Intel Sandy Bridge graphics. I installed the ATI driver from the repository, but I can't select ATI as my graphics card; after installing the ATI driver, Ubuntu loses 3D mode. How can I set the graphics card and switch between the two graphics cards, or set just one of them? My laptop is an HP dv6 6080. I added blacklist radeon to /etc/modprobe.d/blacklist.conf and added the lines below to the /etc/rc.local file: modprobe radeon echo OFF > /sys/kernel/debug/vgaswitcheroo/switch exit 0

    Read the article

  • Implementing a form of port knocking + Phone Factor = 2 Factor auth for RDP?

    - by jshin47
    I have been looking into how to secure a publicly-available RDP endpoint and want to implement our two-factor authentication RADIUS server, PhoneFactor. I would like to implement the following process: User opens up web app in browser In web app, user enters username + password, initiates RADIUS auth Phone factor calls user to complete auth Once user is authenticated, port 3389 is opened on user's IP on pfSense firewall. After some amount of time, firewall rule is removed for that IP I would like to know the following: Is this a typical setup? If it is a bad idea, please explain why. If it is possible, are there any packages that assist with this? Specifically, the third step, where the appropriate firewall rule would need to be added... Edit: I am aware of TS Web Gateway, but I want the users to be able to use the traditional RDP client...

    Read the article

  • Requirement refinement between two levels of specification

    - by user107149
    I am currently working on the definition of the documentation architecture of a system, from customer needs down to software/hardware requirements. I have encountered a big problem with the level of refinement of requirements. The classic architecture is: PTS -- SSS -- SSDD -- SRS/HRS, with PTS: Purchaser Technical Specification; SSS: Supplier System Specification; SSDD: System Segment Design Description; SRS/HRS: Software / Hardware Requirement Specification. Requirements from the PTS are reworked in the SSS; this document only expresses the needs (no design requirements are defined at this level). Then, the system design is described in the SSDD: we allocate requirements from the SSS to functions from the design, and functions are then allocated to components (software or hardware) (we are still at the SSDD level). Finally, for each component, we write one SRS or one HRS. Requirements in an SRS or HRS are refinements of requirements from the SSS (and traceability matrices are made between these two levels). My problem is the following: our system is a complex one, and some of the requirements in the SSS need to be refined twice to be at the right level in the SRS (meaning that software people can understand the requirement well enough to do their coding). But with this document architecture, I can only refine the requirements from the SSS once. The second problem is that only a part of the requirements from the SSS need to be refined twice; the other part only needs one refinement. In the picture below, the green boxes are requirements at the right level for the SRS or HRS, and the purple boxes are intermediate requirements which cannot be included in the SSS since they are design requirements. Where can I put these purple requirements? Has someone already encountered this problem? Should I write two documents at the SRS level? Should I include the intermediate requirements in the SSDD? Should I include the two refinement levels (purple and green) in the same SRS document (I'm not sure that's possible since an SRS is only for one component)? Thanks for your help and expertise ;-)

    Read the article

  • How bad is it to have two methods with the same name but different signatures in two classes?

    - by Super User
    I have a design problem related to a public interface, the names of methods, and the understanding of my API and code. I have two classes like this: class A: ... function collision(self): .... ... class B: .... function _collision(self, another_object, l, r, t, b): .... The first class has one public method named collision, and the second has one private method called _collision. The two methods differ in argument type and number. As an example, let's say that _collision checks whether the object is colliding with another object under certain conditions l, r, t, b (collide on the left side, right side, etc.) and returns true or false. The public collision method, on the other hand, resolves all the collisions of the object with other objects. The two methods have the same name because I think it's better to avoid overloading the design with different names for methods that do almost the same thing, but in distinct contexts and classes. Is this clear enough to the reader, or should I change the methods' names?
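
    For what it's worth, one common way to sidestep the question is to let the names carry the difference in intent rather than relying on the underscore alone; a purely hypothetical sketch with stub bodies:

        class A:
            def resolve_collisions(self):
                """Resolve every collision this object is currently involved in."""
                pass  # previously: collision(self)

        class B:
            def _collides_with(self, another_object, l, r, t, b):
                """Report whether this object hits `another_object` on the given sides."""
                return False  # previously: _collision(self, another_object, l, r, t, b)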

    Read the article

  • Conflict resolution for two-way sync

    - by K.Steff
    How do you manage two-way synchronization between a 'main' database server and many 'secondary' servers, in particular conflict resolution, assuming a connection is not always available? For example, I have a mobile app that uses CoreData as the 'database' on iOS, and I'd like to allow users to edit the contents without an Internet connection. At the same time, this information is available on a website the devices will connect to. What do I do if/when the data on the two DB servers is in conflict? (I refer to CoreData as a DB server, though I am aware it is something slightly different.) Are there any general strategies for dealing with this sort of issue? These are the options I can think of: 1. Always use the client-side data as higher-priority. 2. Same for server-side. 3. Try to resolve conflicts by marking each field's edit timestamp and taking the latest edit. Though I'm certain the third option will leave room for some devastating data corruption. I'm aware that the CAP theorem concerns this, but I only want eventual consistency, so it doesn't rule it out completely, right? Related question: Best practice patterns for two-way data synchronization. The second answer to this question says it probably can't be done.
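
    A tiny sketch of option 3 (per-field timestamps, latest edit wins), assuming each record is stored as a mapping of field name to a (value, edited_at) pair; this is illustrative only and not CoreData-specific. Deletions and clock skew are exactly where this simple scheme corrupts data, which is why real systems add tombstones or version vectors on top of it.

        def merge_last_write_wins(client_record, server_record):
            # Start from the server's copy and overwrite any field the client
            # edited more recently.
            merged = dict(server_record)
            for field, (value, edited_at) in client_record.items():
                if field not in merged or edited_at > merged[field][1]:
                    merged[field] = (value, edited_at)
            return merged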

    Read the article

  • Algorithm to match timestamped events from two sources

    - by urza.cc
    I have two different physical devices (one is a camera, one is another device) that observe the same scene and mark when a specified event occurs (recording a timestamp). So they each produce a series of timestamps of "when the event was observed". Theoretically the recorded timestamps should be very well aligned. Visualized, the ideal situation on two timelines "s" and "r" as recorded from the two devices: but more likely they will not be so nicely aligned, and there might be missing events from timeline s or r. I am looking for an algorithm to match events from "s" and "r" like this, so that the result will be something like: (s1,null); (s2,r1); (s3,null); (s4,r2); (s5,r3); (null,r4); (s6,r5); or something similar, maybe with some "confidence" rating. I have some ideas, but I feel that this might be a well-known problem that has some good known solutions, but I don't know the right terminology. I am a little bit out of my element here; this is not my primary area of programming. Any help, suggestions etc. will be appreciated.
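
    This is essentially sequence alignment (the same family of problems as Needleman-Wunsch); here is a greedy sketch in Python, assuming both timestamp lists are sorted and that a pair counts as a match when the two times differ by at most a chosen tolerance.

        def match_events(s, r, tolerance=0.5):
            out, i, j = [], 0, 0
            while i < len(s) and j < len(r):
                if abs(s[i] - r[j]) <= tolerance:
                    out.append((s[i], r[j])); i += 1; j += 1
                elif s[i] < r[j]:
                    out.append((s[i], None)); i += 1   # event seen only by s
                else:
                    out.append((None, r[j])); j += 1   # event seen only by r
            out += [(t, None) for t in s[i:]] + [(None, t) for t in r[j:]]
            return out

        # match_events([1.0, 2.0, 3.1], [2.05, 3.0])
        # -> [(1.0, None), (2.0, 2.05), (3.1, 3.0)]

    A dynamic-programming alignment with costs for gaps and for time differences gives the more robust version of the same idea, and its per-pair cost can double as the "confidence" rating mentioned above.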

    Read the article

  • Software architecture for two similar classes which require different input parameters for the same method

    - by I Like to Code
    I am writing code to simulate a supply chain. The supply chain can be simulated in either an intermediate-stocking or a cross-docking configuration, so I wrote two simulator objects, IstockSimulator and XdockSimulator. Since the two objects share certain behaviors (e.g. making shipments, demand arriving), I wrote an abstract simulator object AbstractSimulator which is a parent class of the two simulator objects. The abstract simulator object has a method runSimulation() which takes an input parameter of class SimulationParameters. Up till now, the simulation parameters only contain fields that are common to both simulator objects, such as randomSeed, simulationStartPeriod and simulationEndPeriod. However, I now want to include fields that are specific to the type of simulation that is being run, i.e. an IstockSimulationParameters class for an intermediate-stocking simulation, and an XdockSimulationParameters class for a cross-docking simulation. My current idea is to take the method runSimulation() out of the AbstractSimulator class, and to put a runSimulation(IstockSimulationParameters) method in the IstockSimulator class and a runSimulation(XdockSimulationParameters) method in the XdockSimulator class. I am worried, however, that this approach will lead to code duplication. What should I do?
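
    One way out of the duplication is to make the base class generic in its parameter type, so runSimulation() stays in one place while each subclass declares which parameter class it accepts. The post's language isn't stated; the sketch below uses Python with typing just to show the shape, and the extra field names are invented for illustration.

        from abc import ABC, abstractmethod
        from dataclasses import dataclass
        from typing import Generic, TypeVar

        @dataclass
        class SimulationParameters:
            random_seed: int
            start_period: int
            end_period: int

        @dataclass
        class IstockSimulationParameters(SimulationParameters):
            reorder_point: int  # hypothetical intermediate-stocking field

        P = TypeVar("P", bound=SimulationParameters)

        class AbstractSimulator(ABC, Generic[P]):
            def run_simulation(self, params: P) -> None:
                self.setup(params)   # shared behaviour lives here, once
                self.run(params)     # configuration-specific behaviour

            def setup(self, params: SimulationParameters) -> None:
                print("seeding with", params.random_seed)

            @abstractmethod
            def run(self, params: P) -> None: ...

        class IstockSimulator(AbstractSimulator[IstockSimulationParameters]):
            def run(self, params: IstockSimulationParameters) -> None:
                print("intermediate stocking, reorder at", params.reorder_point)

        # IstockSimulator().run_simulation(IstockSimulationParameters(1, 0, 52, reorder_point=10))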

    Read the article

  • Two components offering the same functionality, required by different dependencies

    - by kander
    I'm building an application in PHP, using Zend Framework 1 and Doctrine2 as the ORM layer. All is going well. Now, I happened to notice that both ZF1 and Doctrine2 come with, and rely on, their own caching implementation. I've evaluated both, and while each has its own pros and cons, neither of them stands out as superior to the other for my simple needs. Both libraries also seem to be written against their respective interfaces, not their implementations. The reason I feel this is an issue is that during the bootstrapping of my application, I have to configure two caching drivers, each with its own syntax. A mismatch is easily created this way, and it feels inefficient to set up two connections to the caching backend because of this. I'm trying to determine what the best way forward is, and would welcome any insights you may be able to offer. What I've thought up so far are four options: Do nothing, and accept that two classes offering caching functionality are present. Create a Facade class to stick Zend's interface onto Doctrine's caching implementation. Option 2, the other way around: create a Facade to map Doctrine's interface onto a Zend Framework backend. Use multiple-interface inheritance to create one interface to rule them all, and pray that there aren't any overlaps (i.e. if both have a "save" method, they'll need to accept params in the same order due to PHP's lack of proper polymorphism). Which option is best, or is there a "None of the above" variant that I'm not aware of?
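
    A generic sketch of options 2/3: configure a single backend once, then hang a thin adapter for each framework off it. The method names below are illustrative stand-ins rather than the real ZF1 or Doctrine2 signatures, and are only meant to show that each adapter reduces to a few forwarding calls.

        class MemoryBackend:
            """The one concrete cache connection, configured in one place."""
            def __init__(self):
                self._store = {}
            def get(self, key):
                return self._store.get(key)
            def put(self, key, value):
                self._store[key] = value

        class DoctrineStyleCache:
            def __init__(self, backend):
                self.backend = backend
            def fetch(self, key):
                return self.backend.get(key)
            def save(self, key, value):
                self.backend.put(key, value)

        class ZendStyleCache:
            def __init__(self, backend):
                self.backend = backend
            def load(self, key):
                return self.backend.get(key)
            def save(self, value, key):
                # Note the reversed argument order -- exactly the kind of overlap
                # that makes the single merged interface of option 4 fragile.
                self.backend.put(key, value)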

    Read the article

  • Two Sessions All Humans Should Watch Right Now

    - by Geertjan
    At conferences, I definitely prefer technical sessions over any other kind of session. That's partly because I want to walk away from a conference with new libraries and APIs to play with, such as the AT&T ARO tool that I've been blogging about over the past few days thanks to being introduced to it in a great session by Doug Sillars at Oredev, in Malmo, Sweden. I only say the above to set the scene. And the scene is that I avoid sessions that deal with "agile topics" or whatever that means. I mean those sessions where you're meant to reflect on some way you're developing nothing in particular and then come away with new ways of doing that. I avoid those. Not because I don't necessarily like those or think I have nothing to learn, both of which I don't (or do, depending on how you read double negatives), but because there are so many sessions to attend that I focus on those that actually give me more technical knowledge that I can do something with immediately. Having said all that, here's two absolutely wonderful sessions (and probably many more but I really liked these two) presented at Oredev over the last few days, one by JB Rainsberger and the other by Woody Zuill, both very nice people who I met for the first time during the last few days, and who aren't paying me to promote them, and who're still struggling to figure out how to say my name. Whether you're a developer or manager or whatever you are, take this on trust, and simply watch these screencasts, hey, at most you're going to lose two hours of your life that you would've spent doing something else: Speaking for myself, I'm going to be watching both these presentations again several times in my life, that's for sure.

    Read the article

  • Am I running out of memory or do I have two logical drives instead of one

    - by user30904
    I did a complete reinstall of Ubuntu 13.04 a couple of months ago. Since then, I have switched out my motherboard for another. I kept the same hard drive. I just did an upgrade to 13.10. Recently, after this install, I keep getting the message that I'm running out of memory. I just checked my system usage and was surprised by what I found. I believed that I had installed Ubuntu as a fresh install, but when I check the system usage, it seems like there are two logical drives. I just did the basic install, so I was only expecting to see one partition, but instead I see two. One is a small 300 MB partition; the other is the 300 GB partition I was expecting. Can anyone tell me if I have two partitions and/or logical drives, and if so how I can fix this? I seem to have been running on the smaller drive and now I'm obviously out of space. I want to be able to use the bigger one at least.

    Read the article

  • Detect multitouch (two-finger touch) on a sprite to apply pinch-zoom behaviour

    - by Tahreem
    I am using andengine, want to move, zoom and rotate multiple sprites individually on a scene. I have achieved "move" but for pinch zoom i am unable to get the event to two fingers' touch. Below is the code: public class Main extends SimpleBaseGameActivity { private Camera camera; private BitmapTextureAtlas mBitmapTextureAtlas; private ITextureRegion mFaceTextureRegion; private ITextureRegion mFaceTextureRegion2; Sprite face2; private static final int CAMERA_WIDTH = 800; private static final int CAMERA_HEIGHT = 480; @Override public EngineOptions onCreateEngineOptions() { camera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT); EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_FIXED, new RatioResolutionPolicy( CAMERA_WIDTH, CAMERA_HEIGHT), camera); return engineOptions; } @Override protected void onCreateResources() { BitmapTextureAtlasTextureRegionFactory.setAssetBasePath("gfx/"); this.mBitmapTextureAtlas = new BitmapTextureAtlas( this.getTextureManager(), 1024, 1600, TextureOptions.NEAREST); BitmapTextureAtlasTextureRegionFactory.createTiledFromAsset(this.mBitmapTextureAtlas, // this, "ui_ball_1.png", 0, 0, 1, 1), // this.getVertexBufferObjectManager()); this.mFaceTextureRegion = BitmapTextureAtlasTextureRegionFactory .createFromAsset(this.mBitmapTextureAtlas, this, "ui_ball_1.png", 0, 0); this.mFaceTextureRegion2 = BitmapTextureAtlasTextureRegionFactory .createFromAsset(this.mBitmapTextureAtlas, this, "ui_ball_1.png", 0, 0); this.mBitmapTextureAtlas.load(); this.mEngine.getTextureManager().loadTexture(this.mBitmapTextureAtlas); } @Override protected Scene onCreateScene() { this.mEngine.registerUpdateHandler(new FPSLogger()); final Scene scene = new Scene(); scene.setBackground(new Background(0.09804f, 0.6274f, 0.8784f)); final float centerX = (CAMERA_WIDTH - this.mFaceTextureRegion .getWidth()) / 2; final float centerY = (CAMERA_HEIGHT - this.mFaceTextureRegion .getHeight()) / 2; final Sprite face = new Sprite(centerX, centerY, this.mFaceTextureRegion, this.getVertexBufferObjectManager()) { @Override public boolean onAreaTouched(final TouchEvent pSceneTouchEvent, final float pTouchAreaLocalX, final float pTouchAreaLocalY) { this.setPosition(pSceneTouchEvent.getX() - this.getWidth() / 2, pSceneTouchEvent.getY() - this.getHeight() / 2); return true; } }; face.setScale(2); scene.attachChild(face); scene.registerTouchArea(face); face2 = new Sprite(200, 200, this.mFaceTextureRegion2, this.getVertexBufferObjectManager()) { @Override public boolean onAreaTouched(final TouchEvent pSceneTouchEvent, final float pTouchAreaLocalX, final float pTouchAreaLocalY) { switch(pSceneTouchEvent.getAction()){ case TouchEvent.ACTION_DOWN: int count = pSceneTouchEvent.getMotionEvent().getPointerCount() ; for(int i= 0; i <count; i++) { int id = pSceneTouchEvent.getMotionEvent().getPointerId(i); } break; case TouchEvent.ACTION_MOVE: this.setPosition(pSceneTouchEvent.getX() -this.getWidth() / 2, pSceneTouchEvent.getY()-this.getHeight() / 2); break; case TouchEvent.ACTION_UP: break; } return true; } }; face2.setScale(2); scene.attachChild(face2); scene.registerTouchArea(face2); scene.setTouchAreaBindingOnActionDownEnabled(true); return scene; } } This line int count = pSceneTouchEvent.getMotionEvent().getPointerCount() ; should set 2 to the count variable if i touch the sprite with to fingers, then i can apply zooming functionality (setScale method) on the sprite by getting the distance between the coordinates of two fingers. Can anyone help me? why it does not detect two fingers? 
    And without this, how can I zoom the sprite on a two-finger pinch? I am very new to game development; any help would be appreciated. Thanks in advance.
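
    Independent of AndEngine, the arithmetic behind pinch zoom is just the ratio of the current finger distance to the distance when the second finger went down; here is a small Python sketch with made-up names. AndEngine also ships a PinchZoomDetector helper that wraps this for you, which avoids counting pointers by hand.

        import math

        def distance(p1, p2):
            return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

        def pinch_scale(start_f1, start_f2, now_f1, now_f2, base_scale=1.0):
            # Scale factor relative to the sprite's scale when the pinch started.
            start = distance(start_f1, start_f2)
            if start == 0:
                return base_scale
            return base_scale * distance(now_f1, now_f2) / start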

    Read the article

  • Seeking on a Heap, and Two Useful DMVs

    - by Paul White
    So far in this mini-series on seeks and scans, we have seen that a simple ‘seek’ operation can be much more complex than it first appears.  A seek can contain one or more seek predicates – each of which can either identify at most one row in a unique index (a singleton lookup) or a range of values (a range scan).  When looking at a query plan, we will often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.  As you saw in the first post in this series, the number of hidden seeking operations can have an appreciable impact on performance. Measuring Seeks and Scans I mentioned in my last post that there is no way to tell from a graphical query plan whether you are seeing a singleton lookup or a range scan.  You can work it out – if you happen to know that the index is defined as unique and the seek predicate is an equality comparison, but there’s no separate property that says ‘singleton lookup’ or ‘range scan’.  This is a shame, and if I had my way, the query plan would show different icons for range scans and singleton lookups – perhaps also indicating whether the operation was one or more of those operations underneath the covers. In light of all that, you might be wondering if there is another way to measure how many seeks of either type are occurring in your system, or for a particular query.  As is often the case, the answer is yes – we can use a couple of dynamic management views (DMVs): sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats. Index Usage Stats The index usage stats DMV contains counts of index operations from the perspective of the Query Executor (QE) – the SQL Server component that is responsible for executing the query plan.  It has three columns that are of particular interest to us: user_seeks – the number of times an Index Seek operator appears in an executed plan user_scans – the number of times a Table Scan or Index Scan operator appears in an executed plan user_lookups – the number of times an RID or Key Lookup operator appears in an executed plan An operator is counted once per execution (generating an estimated plan does not affect the totals), so an Index Seek that executes 10,000 times in a single plan execution adds 1 to the count of user seeks.  Even less intuitively, an operator is also counted once per execution even if it is not executed at all.  I will show you a demonstration of each of these things later in this post. Index Operational Stats The index operational stats DMV contains counts of index and table operations from the perspective of the Storage Engine (SE).  It contains a wealth of interesting information, but the two columns of interest to us right now are: range_scan_count – the number of range scans (including unrestricted full scans) on a heap or index structure singleton_lookup_count – the number of singleton lookups in a heap or index structure This DMV counts each SE operation, so 10,000 singleton lookups will add 10,000 to the singleton lookup count column, and a table scan that is executed 5 times will add 5 to the range scan count. The Test Rig To explore the behaviour of seeks and scans in detail, we will need to create a test environment.  The scripts presented here are best run on SQL Server 2008 Developer Edition, but the majority of the tests will work just fine on SQL Server 2005.  A couple of tests use partitioning, but these will be skipped if you are not running an Enterprise-equivalent SKU.  
Ok, first up we need a database: USE master; GO IF DB_ID('ScansAndSeeks') IS NOT NULL DROP DATABASE ScansAndSeeks; GO CREATE DATABASE ScansAndSeeks; GO USE ScansAndSeeks; GO ALTER DATABASE ScansAndSeeks SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE ScansAndSeeks SET AUTO_CLOSE OFF, AUTO_SHRINK OFF, AUTO_CREATE_STATISTICS OFF, AUTO_UPDATE_STATISTICS OFF, PARAMETERIZATION SIMPLE, READ_COMMITTED_SNAPSHOT OFF, RESTRICTED_USER ; Notice that several database options are set in particular ways to ensure we get meaningful and reproducible results from the DMVs.  In particular, the options to auto-create and update statistics are disabled.  There are also three stored procedures, the first of which creates a test table (which may or may not be partitioned).  The table is pretty much the same one we used yesterday: The table has 100 rows, and both the key_col and data columns contain the same values – the integers from 1 to 100 inclusive.  The table is a heap, with a non-clustered primary key on key_col, and a non-clustered non-unique index on the data column.  The only reason I have used a heap here, rather than a clustered table, is so I can demonstrate a seek on a heap later on.  The table has an extra column (not shown because I am too lazy to update the diagram from yesterday) called padding – a CHAR(100) column that just contains 100 spaces in every row.  It’s just there to discourage SQL Server from choosing table scan over an index + RID lookup in one of the tests. The first stored procedure is called ResetTest: CREATE PROCEDURE dbo.ResetTest @Partitioned BIT = 'false' AS BEGIN SET NOCOUNT ON ; IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; IF @Partitioned = 'true' BEGIN -- Enterprise, Trial, or Developer -- required for partitioning tests IF SERVERPROPERTY('EngineEdition') = 3 BEGIN EXECUTE (' DROP TABLE dbo.Example ; IF EXISTS ( SELECT 1 FROM sys.partition_schemes WHERE name = N''PS'' ) DROP PARTITION SCHEME PS ; IF EXISTS ( SELECT 1 FROM sys.partition_functions WHERE name = N''PF'' ) DROP PARTITION FUNCTION PF ; CREATE PARTITION FUNCTION PF (INTEGER) AS RANGE RIGHT FOR VALUES (20, 40, 60, 80, 100) ; CREATE PARTITION SCHEME PS AS PARTITION PF ALL TO ([PRIMARY]) ; CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ON PS (key_col); '); END ELSE BEGIN RAISERROR('Invalid SKU for partition test', 16, 1); RETURN; END; END ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; END; GO The second stored procedure, ShowStats, displays information from the Index Usage Stats and Index Operational Stats DMVs: CREATE PROCEDURE dbo.ShowStats @Partitioned BIT = 'false' AS BEGIN -- Index Usage Stats DMV (QE) SELECT index_name = ISNULL(I.name, I.type_desc), scans = IUS.user_scans, seeks = IUS.user_seeks, lookups = IUS.user_lookups FROM sys.dm_db_index_usage_stats AS IUS JOIN sys.indexes AS I ON 
I.object_id = IUS.object_id AND I.index_id = IUS.index_id WHERE IUS.database_id = DB_ID(N'ScansAndSeeks') AND IUS.object_id = OBJECT_ID(N'dbo.Example', N'U') ORDER BY I.index_id ; -- Index Operational Stats DMV (SE) IF @Partitioned = 'true' SELECT index_name = ISNULL(I.name, I.type_desc), partitions = COUNT(IOS.partition_number), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; ELSE SELECT index_name = ISNULL(I.name, I.type_desc), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; END; The final stored procedure, RunTest, executes a query written against the example table: CREATE PROCEDURE dbo.RunTest @SQL VARCHAR(8000), @Partitioned BIT = 'false' AS BEGIN -- No execution plan yet SET STATISTICS XML OFF ; -- Reset the test environment EXECUTE dbo.ResetTest @Partitioned ; -- Previous call will throw an error if a partitioned -- test was requested, but SKU does not support it IF @@ERROR = 0 BEGIN -- IO statistics and plan on SET STATISTICS XML, IO ON ; -- Test statement EXECUTE (@SQL) ; -- Plan and IO statistics off SET STATISTICS XML, IO OFF ; EXECUTE dbo.ShowStats @Partitioned; END; END; The Tests The first test is a simple scan of the heap table: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example'; The top result set comes from the Index Usage Stats DMV, so it is the Query Executor’s (QE) view.  The lower result is from Index Operational Stats, which shows statistics derived from the actions taken by the Storage Engine (SE).  We see that QE performed 1 scan operation on the heap, and SE performed a single range scan.  Let’s try a single-value equality seek on a unique index next: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 32'; This time we see a single seek on the non-clustered primary key from QE, and one singleton lookup on the same index by the SE.  Now for a single-value seek on the non-unique non-clustered index: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32'; QE shows a single seek on the non-clustered non-unique index, but SE shows a single range scan on that index – not the singleton lookup we saw in the previous test.  That makes sense because we know that only a single-value seek into a unique index is a singleton seek.  A single-value seek into a non-unique index might retrieve any number of rows, if you think about it.  The next query is equivalent to the IN list example seen in the first post in this series, but it is written using OR (just for variety, you understand): EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32 OR data = 33'; The plan looks the same, and there’s no difference in the stats recorded by QE, but the SE shows two range scans.  Again, these are range scans because we are looking for two values in the data column, which is covered by a non-unique index.  I’ve added a snippet from the Properties window to show that the query plan does show two seek predicates, not just one.  
Now let’s rewrite the query using BETWEEN: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data BETWEEN 32 AND 33'; Notice the seek operator only has one predicate now – it’s just a single range scan from 32 to 33 in the index – as the SE output shows.  For the next test, we will look up four values in the key_col column: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col IN (2,4,6,8)'; Just a single seek on the PK from the Query Executor, but four singleton lookups reported by the Storage Engine – and four seek predicates in the Properties window.  On to a more complex example: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WITH (INDEX([PK dbo.Example key_col])) WHERE key_col BETWEEN 1 AND 8'; This time we are forcing use of the non-clustered primary key to return eight rows.  The index is not covering for this query, so the query plan includes an RID lookup into the heap to fetch the data and padding columns.  The QE reports a seek on the PK and a lookup on the heap.  The SE reports a single range scan on the PK (to find key_col values between 1 and 8), and eight singleton lookups on the heap.  Remember that a bookmark lookup (RID or Key) is a seek to a single value in a ‘unique index’ – it finds a row in the heap or cluster from a unique RID or clustering key – so that’s why lookups are always singleton lookups, not range scans. Our next example shows what happens when a query plan operator is not executed at all: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 8 AND @@TRANCOUNT < 0'; The Filter has a start-up predicate which is always false (if your @@TRANCOUNT is less than zero, call CSS immediately).  The index seek is never executed, but QE still records a single seek against the PK because the operator appears once in an executed plan.  The SE output shows no activity at all.  This next example is 2008 and above only, I’m afraid: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WHERE key_col BETWEEN 1 AND 30', @Partitioned = 'true'; This is the first example to use a partitioned table.  QE reports a single seek on the heap (yes – a seek on a heap), and the SE reports two range scans on the heap.  SQL Server knows (from the partitioning definition) that it only needs to look at partitions 1 and 2 to find all the rows where key_col is between 1 and 30 – the engine seeks to find the two partitions, and performs a range scan seek on each partition. The final example for today is another seek on a heap – try to work out the output of the query before running it! EXECUTE dbo.RunTest @SQL = 'SELECT TOP (2) WITH TIES * FROM Example WHERE key_col BETWEEN 1 AND 50 ORDER BY $PARTITION.PF(key_col) DESC', @Partitioned = 'true'; Notice the lack of an explicit Sort operator in the query plan to enforce the ORDER BY clause, and the backward range scan. © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • Solution 6: Kill a Non-Clustered Process during Two-Node Cluster Failover

    - by StanleyGu
    Using Visual Studio 2008 and C#, I developed a Windows service A and deployed it to two nodes of a Windows Server 2008 failover cluster. Service A is part of the failover cluster service, which means that when failover occurs at node 1, the cluster service will fail the Windows service A over from node 1 to node 2. One of the tasks implemented by the Windows service A is to start, monitor or kill a process B. Process B is installed on the two nodes but is not part of the failover cluster service. When a failover occurs at node 1, the cluster service does not fail process B over from node 1 to node 2, and process B continues running at node 1. The requirement is: when failover occurs at node 1, we want the process B running at node 1 to get killed, but we do not want process B to be part of the failover cluster service. The first idea that pops up immediately is to put some code in an event handler triggered by the failover in the Windows service A. The failover's effect on the Windows service A is similar to using Task Manager to kill the process of the Windows service A, but there is no event in a Windows service that can be triggered by killing the process of the Windows service. The events related to terminating a Windows service are OnStop and OnShutdown, but killing the process of Windows service A triggers neither of them. The OnStop event can only be triggered by stopping the Windows service using the Services Control Manager or the Services Management Console. Apparently, the first idea is not feasible. The second idea that emerges is to put code into the OnStart event handler of the Windows service A. When failover occurs at node 1, the Windows service A is killed at node 1 and started at node 2. During startup, the Windows service A at node 2 kills the process B that is running at node 1. It is a workaround and works very well. The C# implementation within the OnStart event handler is as follows: 1. Capture the server names of the two nodes from App.config. 2. Determine the server name of the remote node. 3. Kill the process B running on the remote node. Check here for sample code.
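
    A language-agnostic illustration of the three steps (the post implements them in C# inside OnStart): the node names and the process name "ProcessB.exe" are placeholders here, and in the real service they come from App.config. Windows' taskkill can act on a remote system with /S, given suitable credentials.

        import socket
        import subprocess

        NODES = ["NODE1", "NODE2"]                      # step 1: both node names from config
        local = socket.gethostname().upper()
        remote = next(n for n in NODES if n != local)   # step 2: the other node

        # Step 3: kill process B on the remote node.
        subprocess.run(["taskkill", "/S", remote, "/IM", "ProcessB.exe", "/F"])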

    Read the article
