Search Results

Search found 15935 results on 638 pages for 'background color'.

  • 11.10 liveCD black screen

    - by Shaun Killingbeck
    Attempting to install/try Ubuntu 11.10 on my new laptop, using a live CD (I also tried USB). I get the purple screen (with the man/keyboard at the bottom), and after that the screen flashes bright white before going black. Ubuntu continues to load in the background, with the login sound etc., but the screen is off. I have tried as many different solutions as I could find, including passing nomodeset, xforcevesa, and i915.modeset=0 in the boot options (separately), with varying consequences: I either end up at a blinking cursor with no prompt, a command line (startx fails: no screen found), or the original blank screen again. Booting from VirtualBox crashes at the same place the screen would go blank when using a CD/USB. With 11.04 I don't have this problem, BUT when trying to install I get a ubi-partman error 141 (possibly down to the three partitions that came on my laptop... not sure why HP needed their own separate partition for HP Tools...). Model: HP Pavilion DV6 6B08SA. Processor: AMD Quad-Core A6-3410MX APU with Radeon HD 6545G2 Dual Graphics (1.6 GHz, 4 MB L2 cache). Chipset: AMD RS880M. Any help would be greatly appreciated; I just want to be able to partition the drive and install Ubuntu. I'm assuming the issue is graphics-card related, although I have no confirmation of that. I have caught a glimpse of some output to do with pulseaudio and [fail], but I can't imagine why that would cause a screen problem, and the sound definitely works anyway.

  • Effect of using dedicated NVidia card instead of Intel HD4000

    - by Sman789
    Short version: Can someone please advise me on the effect of adding a dedicated NVIDIA GeForce GT 630M card to an Ubuntu laptop, in terms of power consumption and performance gains/losses when doing general productivity tasks and booting up? Also, how good are the closed-source, open-source, and Bumblebee drivers for these newer cards compared to the support for the Intel HD4000?

    Long version/background, if any info here is helpful: I'm thinking of ordering a laptop from PC Specialist (a UK company who actually sell machines without Windows pre-installed) with the following specifications:

    - Genesis IV: 15.6" AUO Matte 95% Gamut LED Widescreen (1920x1080)
    - Intel Core i5 Dual Core Mobile Processor i5-3210M (2.50GHz) 3MB
    - 4GB Samsung 1600MHz SODIMM DDR3 memory (1 x 4GB)
    - 120GB Intel 520 Series SSD, SATA 6 Gb/s (up to 550MB/s read, 520MB/s write)
    - Intel 2 Channel High Definition Audio + mic/headphone jack
    - Gigabit LAN & wireless Intel N135 802.11n (150Mbps) + Bluetooth

    Now, as I want this laptop mainly for work and not for games, I would be more than content with the HD4000 integrated chip which comes with the processor. However, for compatibility reasons, I am not able to get the specs I want unless I choose a NVIDIA GeForce GT 630M 1GB graphics card, which I don't have a great deal of use for. I'm willing to buy it, however, as it's still cheaper than any other laptop with the specs I want. However, I know that Linux power management isn't fantastic with open-source graphics drivers, and I don't know much about Bumblebee. Basically, whilst I'm happy to 'tolerate' the card being there, I don't want to experience any negative effects on the rest of my system (battery, performance etc.), and if there are likely to be any, I might reconsider my purchase. So if anyone can advise me on the effects, I would be very grateful, since I doubt I can just turn the card off. Thank you for any assistance :)
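
    For context, a minimal sketch of the usual Bumblebee setup on this kind of Intel+NVIDIA hybrid (package names are the ones current for 12.04-era releases; treat this as an assumption to verify, not a tested recipe). With Bumblebee in place the discrete card stays powered down, and everything runs on the HD4000 unless a program is explicitly launched on the NVIDIA card:

        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia
        optirun glxgears    # runs this one program on the GT 630M; the desktop stays on the HD4000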

  • Advanced PHP book [closed]

    - by Aaditi Sharma
    I've gone and stumbled across a lot of recommendations for PHP books, including on SO, but could not find a reasonable and convincing answer to this: is there a really good advanced book for PHP? Background: I've done almost 8 months in PHP. I know the basics. I go through php.net very often. I've played around with CodeIgniter, amongst other frameworks. I've been doing JavaScript for almost 2 years, and thanks specifically to Douglas Crockford I completely changed the way I code JavaScript. I spend a lot of time travelling, and would love to read a book about PHP that covers the awesome parts and also the places where something doesn't quite work in PHP. (As a note, a lot of previous answers on SO and Programmers give varied results.) I have to place an order through a library which has its limitations. One book that experienced PHP programmers could recommend would be helpful. I have gone through http://stackoverflow.com/questions/1711/what-is-the-single-most-influential-book-every-programmer-should-read and http://stackoverflow.com/questions/194812/list-of-freely-available-programming-books, which do NOT have books related to PHP.

  • How do references work in R?

    - by djechlin
    I'm finding R confusing because it has such a different notion of reference than I am used to in languages like C, Java, JavaScript, Ruby, Python, C++, or pretty much any language I have ever programmed in. So one thing I've noticed is that variable names are not irrelevant when passing them to something else; the reference can be part of the data. E.g., per this tutorial,

        a <- factor(c("A","A","B","A","B","B","C","A","C"))
        results <- table(a)

    leads to $a showing up as an attribute, as $dimnames$a. We've also witnessed that calling a function like a <- foo(alpha=1, beta=2) can create attributes in a named alpha and beta, or it can assign or otherwise compute on 1 and 2 as properties that already exist. (Not that there's a computer science distinction here; it just doesn't really happen in something like JavaScript, unless you emulate it by passing in an object and using key=value there.) Functions like names(...) can appear on the left-hand side of an assignment, and assigning to them affects their input. And the one that most got me is this:

        x <- c(3, 5, 1, 10, 12, 6)
        y = x[x <= 5]
        x[y] <- 0

    is different from

        x <- c(3, 5, 1, 10, 12, 6)
        x[x <= 5] <- 0

    Color me confused. Is there a consistent theory for what's going on here?
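
    A minimal annotated session showing why those two differ (results below assume the vectors above): y holds the *values* that pass the test, so x[y] indexes by position 3, 5 and 1, rather than by a logical mask:

        x <- c(3, 5, 1, 10, 12, 6)
        y <- x[x <= 5]    # y is c(3, 5, 1): the passing values, not their positions
        x[y] <- 0         # zeroes positions 3, 5 and 1, giving c(0, 5, 0, 10, 0, 6)

        x <- c(3, 5, 1, 10, 12, 6)
        x[x <= 5] <- 0    # logical mask zeroes the values themselves: c(0, 0, 0, 10, 12, 6)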

  • Service layer coupling

    - by Justin
    I am working on writing a service layer for an order system in PHP. It's the typical scenario: you have an Order that can have multiple Line Items. So let's say a request is received to store a line item with pictures and comments. I might receive a JSON request such as:

        {
            "type": "Bike",
            "color": "Red",
            "commentIds": [3193, 3194],
            "attachmentIds": [123, 413]
        }

    My idea was to have a Service_LineItem_Bike class that knows how to take the JSON data and store an entity for a bike. My question is, the Service_LineItem class now needs to fetch comments and file attachments, and store the relationships. Service_LineItem seems like it should interact with a Service_Comment and a Service_FileUpload. Should instances of these two other services be instantiated and passed to the Service_LineItem constructor, or set by getters and setters? Dependency injection seems like the right solution, while allowing a service access to a 'service fetching helper' seems wrong; that should stay at the application level. I am using Doctrine 2 as an ORM, and I can technically write a DQL query inside Service_LineItem to fetch the comments and file uploads necessary for the association, but this seems like it would create tighter coupling, rather than leaving this up to the right service object.
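
    For what it's worth, a minimal constructor-injection sketch of the first option (method names beyond the classes in the question are illustrative, not an established API):

        <?php
        class Service_LineItem_Bike
        {
            private $commentService;
            private $fileUploadService;

            // Dependencies are handed in by the application's wiring code;
            // the service never reaches out to a service locator itself.
            public function __construct(Service_Comment $commentService,
                                        Service_FileUpload $fileUploadService)
            {
                $this->commentService = $commentService;
                $this->fileUploadService = $fileUploadService;
            }

            public function createFromRequest(array $data)
            {
                $comments = $this->commentService->findByIds($data['commentIds']);
                $uploads = $this->fileUploadService->findByIds($data['attachmentIds']);
                // ... build the Bike entity and associate $comments and $uploads
            }
        }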

  • Space (and pipe sign) works on occasion only

    - by Timo Riikonen
    I have an issue where, when I try to write the pipe sign "|" followed by a space, I sometimes get the wrong kind of space (\240) and my command fails. This issue persists across different shells. How could I fix this? I am using the Finnish keyboard layout.

        timo@timo-i7-ubuntu:~$ ps -ef | grep ruby
        timo 7169 2633 0 12:12 pts/2 00:00:00 ruby1.9.1 /usr/local/bin/rails new admin4
        timo 8736 26515 0 14:22 pts/4 00:00:00 grep --color=auto ruby
        timo@timo-i7-ubuntu:~$ ps -ef | grep ruby
        No command ' grep' found, did you mean:
         Command 'igrep' from package 'openimageio-tools' (universe)
         Command 'dgrep' from package 'debian-goodies' (main)
         Command 'rgrep' from package 'grep' (main)
         Command 'zgrep' from package 'gzip' (main)
         Command 'zgrep' from package 'zutils' (universe)
         Command 'sgrep' from package 'sgrep' (universe)
         Command 'lgrep' from package 'lv' (universe)
         Command 'egrep' from package 'grep' (main)
         Command 'ngrep' from package 'ngrep' (universe)
         Command 'grep' from package 'grep' (main)
         Command 'agrep' from package 'agrep' (multiverse)
         Command 'pgrep' from package 'procps' (main)
         Command 'xgrep' from package 'xgrep' (universe)
         Command 'vgrep' from package 'atfs' (universe)
         Command 'fgrep' from package 'grep' (main)
         grep: command not found
        timo@timo-i7-ubuntu:~$ cat pipecom
        ps -ef | grep rails
        timo@timo-i7-ubuntu:~$ cat pipecom2
        ps -ef | grep rails
        timo@timo-i7-ubuntu:~$ ./pipecom
        timo 7169 2633 0 12:12 pts/2 00:00:00 ruby1.9.1 /usr/local/bin/rails new admin4
        timo 8777 8775 0 14:26 pts/4 00:00:00 grep rails
        timo@timo-i7-ubuntu:~$ ./pipecom2
        ./pipecom2: line 1: $'\302\240grep': command not found
        timo@timo-i7-ubuntu:~$ diff -w pipecom pipecom2
        1c1
        < ps -ef | grep rails
        ---
        > ps -ef | grep rails
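
    The stray character is U+00A0 (no-break space, UTF-8 bytes c2 a0), which Finnish-style layouts can emit as AltGr+Space. A hedged two-part fix: strip the bytes from the affected script, and remap the key so it stops happening (the xkb option name is an assumption to verify against your release):

        sed -i 's/\xc2\xa0/ /g' pipecom2      # replace no-break spaces with plain spaces
        setxkbmap fi -option "nbsp:none"      # make Space produce a normal space at every level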

  • relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine and ran into these problems: I have a list of models that I need to draw, and a renderer object that implements an IRenderer interface with the method DrawObject(Object* obj). The implementation of the renderer depends on the graphics library in use (OpenGL/DirectX).

    First question: the model should know nothing about the renderer implementation, but in that case, where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition:

        class Model {
        public:
            Model();
            Vertex* GetVertices() const;
        private:
            Vertex* m_vertices;
        };

    what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object?

    Second question: what is the best way for the model to tell the renderer HOW it must be rendered (for example with texture, bump mapping, or maybe just in one color)? I thought it could be done with flags, like this:

        model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING);

    and then checking each flag in the Renderer::DrawModel method. But it looks like that will become unwieldy as the number of options grows...
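
    On the first question, one common pattern is to keep the API-specific data on the renderer's side, keyed by the model, so the model stays API-agnostic. A minimal sketch under that assumption (names are illustrative):

        #include <unordered_map>

        class Model;          // knows only vertices, nothing about the API
        struct GpuBuffer;     // defined inside the DX11/GL renderer only

        class Dx11Renderer {  // implements the engine's renderer interface
        public:
            void DrawObject(Model* model) {
                // Lazily create and cache the vertex buffer for this model.
                auto it = m_cache.find(model);
                if (it == m_cache.end())
                    it = m_cache.emplace(model, CreateBuffer(model)).first;
                Draw(it->second);
            }
        private:
            GpuBuffer* CreateBuffer(Model* m);   // uploads m->GetVertices()
            void Draw(GpuBuffer* buffer);
            std::unordered_map<Model*, GpuBuffer*> m_cache;
        };

    On the second question, the usual alternative to a growing flag set is a small material description object (texture, lighting model, etc.) that the model hands to the renderer as data rather than as bit flags.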

  • Install tmux on Mac OS X

    - by unixben
    This is a short rundown of how to get tmux running on your Mac OS X system. The same methodology applies when compiling it on Solaris.

    What is tmux? According to the developer's page, "tmux is a terminal multiplexer: it enables a number of terminals (or windows), each running a separate program, to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached".

    Why not just use screen? For me, the primary reason I switched to tmux from screen is the much easier configuration syntax that tmux offers. If you've ever struggled with formatting screen's caption or hardstatus line, then you will appreciate the ease with which you can achieve the same results in tmux.

    Preparing your environment: You will need a C compiler installed. I believe that OS X ships by default with GNU make, but if not, then you will need to obtain it or use Xcode.

    Download the sources. While I'm putting all this together, I like to keep everything neatly tucked away in a build directory:

        mkdir ~/build
        cd ~/build
        curl -OL http://downloads.sourceforge.net/tmux/tmux-1.5.tar.gz
        curl -OL http://downloads.sourceforge.net/project/levent/libevent/libevent-2.0/libevent-2.0.16-stable.tar.gz

    Unpack the sources:

        tar xzf tmux-1.5.tar.gz
        tar xzf libevent-2.0.16-stable.tar.gz

    Compiling libevent:

        cd libevent-2.0.16-stable
        ./configure --prefix=/opt
        make
        sudo make install

    Compiling tmux:

        cd ../tmux-1.5
        LDFLAGS="-L/opt/lib" CPPFLAGS="-I/opt/include" LIBS="-lresolv" ./configure --prefix=/opt
        make
        sudo make install

    That's all there is to it!

  • TransformXml Task locks config file identified in Source attribute

    - by alexhildyard
    As background: the TransformXml MSBuild task is typically invoked in a custom build step to mark up a web.config file with per-environment configuration; its flexible directives offer highly granular control over the insertion, removal, substitution and transformation of existing configuration hierarchies. For those using the TransformXML task (typically in a Web Deployment Project) I raised an issue against Visual Studio 2010, in which the file handle on the input file was not released, meaning that following transformation the source file remained locked. As a result, the best way to transform a file was first to rename it, transform it, and then copy it back, leaving the "locked" file to be freed up later.I just heard today that this has now been resolved in Visual Studio 2012 RTM. That's good news, because Web Config Transformations offer a lot. An intelligent, automated build process will swap in the relevant transform(s), making it much easier to synthesise the Developer and Build server builds. This makes for a simpler and more exemplary build process, and with the tighter coupling comes a correspondingly quicker response to Developmental change.Oh, and don't forget -- it isn't just web.configs you can transform. You can transform app.configs, or indeed any XML file that honours the task's schema and hierarchical rules.
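
    For anyone still on the VS2010 behaviour, a sketch of the rename-transform-copy workaround as an MSBuild target (the file names and the VS version segment in the task path are assumptions to adjust for your setup):

        <UsingTask TaskName="TransformXml"
                   AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll" />
        <Target Name="TransformConfig">
          <!-- Transform a renamed copy so the lingering lock lands on the temp file -->
          <Copy SourceFiles="web.config" DestinationFiles="web.temp.config" />
          <TransformXml Source="web.temp.config"
                        Transform="web.$(Configuration).config"
                        Destination="web.config" />
        </Target>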

  • I need some career/education advice regarding computer science [on hold]

    - by user2521987
    So I'm a senior mathematics major this fall, and I have only taken three CS classes (Java I, Java II, and C++). This summer I am participating in a mathematics REU (Research Experience for Undergraduates), and I program in C++ about 8 hours a day... and I find that I absolutely love it. I love using programming to solve math problems in my research. I think I want to pursue a career in programming. I have a few options:

    1. Stay at my university an extra 1-1.5 years (beyond the 4) and do a double major in Math/CS. This will put me up to around 7-10k in debt (currently I have no debt and am scheduled to graduate debt-free). Then apply to a masters in CS.
    2. Apply directly to a masters in CS from a math undergraduate degree. I don't like this idea because I likely won't get into a good program, or get funded, with such little background.
    3. Go to graduate school, funded, in applied mathematics and try to further my knowledge of computer science while there. Then apply to a masters in CS.

    I'm not sure if 1 or 3 would be better. My end goal would be to go to a top 20-30 CS graduate program and to get a cool, good job. What would you recommend?

  • Greeter login screen cropped login options in 12.04

    - by ammianus
    I have a pretty newly installed Ubuntu 12.04, using Unity. My external monitor is 1920x1080 max resolution. In the Unity desktop itself everything looks great. I have an NVidia graphics card. When I start my computer and get to the Unity greeter login screen the display is oddly formatted and the resolution seems off. It looks like a zoomed view on the larger 1920x1080 screen. As such it crops the login options off to the left hand side of the screen. So I can only just see the edge of the password box for the user I want to log in with. I can log in with one account by default by blindly typing the password, but I am unable to switch to other accounts. Is there anything I can do to fix the log in screen display so that I can see the normal login options? Note: I first noticed it when I changed my desktop background and the next time I logged in I saw the issue.
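
    One hedged workaround (the script path and output name are assumptions; check your output name with xrandr -q from a terminal) is to force the greeter's resolution through LightDM's display setup hook:

        #!/bin/sh
        # /usr/local/bin/greeter-resolution.sh -- make it executable
        xrandr --output HDMI1 --mode 1920x1080

    and then point LightDM at it:

        # /etc/lightdm/lightdm.conf
        [SeatDefaults]
        greeter-session=unity-greeter
        display-setup-script=/usr/local/bin/greeter-resolution.sh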

  • Add control to grid from code behind in Silverlight

    - by Emanuele Bartolesi
    In this post I show how you can easily add a control to a Silverlight grid layout from code-behind. First you draw the grid in the XAML:

        <Grid x:Name="LayoutRoot" Background="Red">
            <Grid.RowDefinitions>
                <RowDefinition Height="20" />
            </Grid.RowDefinitions>
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="300" />
            </Grid.ColumnDefinitions>
        </Grid>

    Now in the page constructor add the following code:

        public MainPage()
        {
            InitializeComponent();
            var myButton = new Button
            {
                Name = "btnOk",
                Content = "Ok",
            };
            // Grid rows and columns are zero-based; this grid defines one row and one column.
            myButton.SetValue(Grid.RowProperty, 0);
            myButton.SetValue(Grid.ColumnProperty, 0);
            myButton.Click += myButton_Click;
            LayoutRoot.Children.Add(myButton);
        }

    Also add the click event handler for the button:

        void myButton_Click(object sender, RoutedEventArgs e)
        {
        }

    The code needs little comment because it is very simple. The only important thing is the SetValue method, which is used to set a XAML attached property on the element. For a better understanding I have created an example that you can download from here.

  • how to properly implement alpha blending in a complex 3d scene

    - by Gajet
    I know this question might sound easy to answer, but it's driving me crazy. There are too many possible situations that a good alpha blending mechanism should handle, and for each algorithm I can think of there is something missing. These are the methods I've thought about so far:

    First I thought about sorting objects by depth. This simply fails because objects are not simple shapes; they might have curves and might loop inside each other, so I can't always tell which one is closer to the camera.

    Then I thought about sorting triangles, but this might also fail. Though I'm not sure how to implement it, there is a rare case that would again cause a problem, in which two triangles pass through each other. Again, no one can tell which one is nearer.

    The next thing was using the depth buffer. At least the main reason we have a depth buffer is the sorting problems I mentioned, but now we get another problem: since objects might be transparent, more than one object can be visible in a single pixel. So for which object should I store the pixel depth? I then thought maybe I could store only the frontmost object's depth, and use that to determine how to blend the next draw calls at that pixel. But again there was a problem: think about two semi-transparent planes with a solid plane in the middle of them. If the solid plane is rendered last, you can still see the most distant plane through it, since I was going to merge every two planes until only one color was left for that pixel. Obviously I can't fall back on sorting methods here either, for the same reasons I explained above.

    Finally, the only thing I can imagine working is to render all objects into different render targets, then sort those layers and composite the final output. But this time I don't know how to implement that algorithm.
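
    For reference, a minimal sketch of the baseline per-object sort from the first paragraph (it inherits exactly the failure cases described, interpenetrating or concave objects, so it is a starting point rather than a fix; names are illustrative):

        #include <algorithm>
        #include <vector>

        struct Vec3 { float x, y, z; };
        struct Object { Vec3 center; /* mesh, material, ... */ };

        // Sort transparent objects far-to-near, to be drawn after the opaque pass.
        void sortBackToFront(std::vector<Object*>& objects, Vec3 cam) {
            auto dist2 = [&](const Object* o) {
                float dx = o->center.x - cam.x, dy = o->center.y - cam.y, dz = o->center.z - cam.z;
                return dx * dx + dy * dy + dz * dz;
            };
            std::sort(objects.begin(), objects.end(),
                      [&](const Object* a, const Object* b) { return dist2(a) > dist2(b); });
        }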

  • How can I ensure my Collada model fits on an iPhone screen?

    - by rakeshNS
    Hi, I am new to game development. I have seen many examples and tried some myself, like displaying a triangle, a cube, etc. Now I am trying to render a Collada object. I created one using Google SketchUp and am trying to render it now. The thing I don't understand is this: in all the examples, the vertices are between -1.0 and +1.0. But when I looked into the Collada file, the vertices ranged from -30.0 to 90.0. I know any vertices greater than 1.0 will not display on the iPhone. So can you please tell me the secret behind converting object coordinates to normalized coordinates? My previous triangle was defined as:

        struct Vertex {
            float Position[3];
            float Color[4];
        };

        const Vertex Vertices[] = {
            {{-0.5, -0.866}, {1, 1, 0.5f, 1}},
            {{0.5, -0.866}, {1, 1, 0.5, 1}},
            {{0, 1}, {1, 1, 0.5, 1}},
            {{-0.5, -0.866}, {0.5f, 0.5f, 0.5f}},
            {{0.5, -0.866}, {0.5f, 0.5f, 0.5f}},
            {{0, -0.4f}, {0.5f, 0.5f, 0.5f}},
        };

    And now the triangle data from the Collada file is:

        const Vertex Vertices[] = {
            {{39.4202092, 90.1263924, 0.0000000}, {1, 1, 0.5f, 1}},
            {{-20.2205588, 90.1263924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 176.3763924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 176.3763924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 90.1263924, 0.0000000}, {1, 1, 0.5, 1}},
            {{39.4202092, 90.1263924, 0.0000000}, {1, 1, 0.5, 1}},
        };
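
    There is no hard -1..1 limit baked into the hardware; the examples just draw untransformed coordinates, so anything outside clip space gets cut off. A hedged post-import fix (plain C, names illustrative) is to center the model and scale it into a unit cube once at load time; the proper long-term answer is a model-view-projection matrix:

        /* Center positions and scale them into roughly [-1, 1]. */
        void normalizeVertices(float *pos, int vertexCount) {
            float mn[3] = { 1e30f, 1e30f, 1e30f }, mx[3] = { -1e30f, -1e30f, -1e30f };
            int i, a;
            for (i = 0; i < vertexCount * 3; i++) {
                a = i % 3;
                if (pos[i] < mn[a]) mn[a] = pos[i];
                if (pos[i] > mx[a]) mx[a] = pos[i];
            }
            float center[3], half = 0.0f;
            for (a = 0; a < 3; a++) {
                center[a] = 0.5f * (mn[a] + mx[a]);
                if (0.5f * (mx[a] - mn[a]) > half) half = 0.5f * (mx[a] - mn[a]);
            }
            for (i = 0; i < vertexCount * 3; i++)
                pos[i] = (pos[i] - center[i % 3]) / half;
        }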

  • How important is Domain knowledge vs. Technical knowledge?

    - by Mayank
    I am working on a Trading and Risk Management application and, although from a C# background, I have been asked to work on SSIS packages. Now I can live with that. The pain point is that there is too much emphasis on business understanding. Trading (energy trading, to be exact) is a HUGE area, and understanding every little bit of it is overwhelming. But for the past two months I have been working on understanding the business terms: Mark to Market, Risk Metrics, Positions, PnL, Greeks, Instruments, Book Structure... every little detail (you get the point). Now IMHO, this is the job of a BA. Sure, it is very important for developers to understand the business, but where do you draw the line? When I talked to my manager about this, he almost mocked me by saying that anybody can learn a technology in a week; it's the business that's harder. My long-term aspiration is to remain on the technical side, and probably become an architect (if possible). If I wanted to focus so much on business, I would have pursued an MBA! I want to know: am I wrong or too naive about the importance of the business knowledge, or is my frustration justified?

  • Collision detection with non-rectangular images

    - by Adam Smith
    I'm creating a game and I need to detect collisions between a character and some parts of the environment. Since my character's frames are taken from a sprite sheet with a transparent background, I'm wondering how I should go about detecting collisions between a wall and my character only if the colliding parts are non-transparent in both images. I thought about first checking whether the rectangle the character is in touches the rectangle a tile is in, and then comparing the alpha channels, but that leaves another choice to make. Either I test every single pixel against every single pixel in the other image, and if one pair matches, I detect a collision; that would be terribly inefficient. The other option would be to keep the x,y position of the leftmost, rightmost, etc. non-transparent pixel of each image and compare those instead. The problem there is that, for instance, the character's hand could be above a tile (in a transparent zone of the tile), while a pixel that is not the rightmost could touch part of the tile without being detected. Another problem is that in different frames, the rightmost, leftmost, etc. pixels might not be at the same position. Should I not bother with all that and just check collisions on the rectangles? It would be simpler, but I'm afraid people will feel that there are sometimes collisions that shouldn't happen.
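
    A minimal sketch of the usual middle ground, assuming XNA-style Rectangle/Color types with the pixel data fetched once per texture via Texture2D.GetData: intersect the bounding rectangles first, then test alpha only inside the overlap, so the per-pixel loop runs rarely and over a small area:

        // True if any pixel in the overlap is non-transparent in both sprites.
        static bool PixelCollision(Rectangle a, Color[] dataA, Rectangle b, Color[] dataB)
        {
            Rectangle overlap = Rectangle.Intersect(a, b);
            if (overlap.IsEmpty)
                return false;                      // bounding boxes don't even touch

            for (int y = overlap.Top; y < overlap.Bottom; y++)
            {
                for (int x = overlap.Left; x < overlap.Right; x++)
                {
                    Color ca = dataA[(x - a.Left) + (y - a.Top) * a.Width];
                    Color cb = dataB[(x - b.Left) + (y - b.Top) * b.Width];
                    if (ca.A != 0 && cb.A != 0)
                        return true;               // solid pixel in both sprites
                }
            }
            return false;
        }

    For animated characters, the color array would be indexed through the current frame's source rectangle on the sheet rather than the whole texture.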

  • Wirelessly Sync Photos From iPhone To Computer Using CameraSync

    - by Gopinath
    How do you upload photos captured on your iOS device to your computer? By connecting the device using a cable and then syncing up with an app? Isn't that a boring way? Here comes CameraSync, an app that lets you wirelessly send your iOS device photos to Dropbox, so that you can access them on your computer irrespective of the platform (Windows, Mac, Linux). By the way, this app works in the background and syncs the files without disturbing you. You don't like Dropbox? CameraSync works with a variety of cloud services: Flickr, Amazon S3, iDisk, FTP and Box.net. If you are looking for a step-by-step guide on how to set up CameraSync for Dropbox, check this post. CameraSync costs $1.99 and runs on iOS 4.0+ devices. CameraSync [iTunes App via Lifehacker]

  • Silverlight 4 – Coded UI Framework Video Tutorial

    - by mbcrump
    With the release of Visual Studio 2010 Feature Pack 2, Microsoft included the Coded UI Test framework, which makes it possible to create automated tests with just a few mouse clicks. This is a very powerful feature that all Silverlight developers need to learn. Instead of my normal blog post, I have created a video tutorial that walks you through it, starting from "File" -> New Project. I hope you enjoy it, and please leave feedback.

    Video tutorial (short 9-minute video). Slides from the demo (only 3): Silverlight 4 - Coded UI Testing.

    Code for the MainPage.xaml that was used in the demo. For the sake of time, I did not go into the AutomationProperties.Name values that I used for the TextBox and Button; I added one for each element:

        <Grid x:Name="LayoutRoot" Background="White" Height="100" Width="350">
            <Grid.ColumnDefinitions>
                <ColumnDefinition/>
                <ColumnDefinition/>
            </Grid.ColumnDefinitions>
            <Grid.RowDefinitions>
                <RowDefinition/>
                <RowDefinition/>
            </Grid.RowDefinitions>
            <TextBlock Padding="15" Grid.Column="0" TextAlignment="Right">Name</TextBlock>
            <TextBox AutomationProperties.Name="txtAP" Grid.Column="1" Height="25" TextAlignment="Right" Name="txtName" />
            <Button AutomationProperties.Name="btnAP" Grid.Row="1" Grid.Column="1" Content="Click for Name" x:Name="btnMessage" Click="btnMessage_Click" />
        </Grid>

  • Visually and audibly unambiguous subset of the Latin alphabet?

    - by elliot42
    Imagine you give someone a card with the code "5SBDO0" on it. In some fonts, the letter "S" is difficult to visually distinguish from the number five (likewise the number zero and the letter "O"). Reading the code out loud, it might be difficult to distinguish "B" from "D", necessitating saying "B as in boy" and "D as in dog", or using a phonetic alphabet instead. What's the biggest subset of letters and numbers that will, in most cases, both look unambiguous visually and sound unambiguous when read aloud?

    Background: We want to generate a short string that can encode as many values as possible while still being easy to communicate. Imagine you have a 6-character string, "123456". In base 10 this can encode 10^6 values. In hex, "1B23DF" can encode 16^6 values in the same number of characters, but it can sound ambiguous when read aloud ("B" vs. "D"). Likewise, any string of N characters gives you (size of alphabet)^N values. The string is limited to a length of about six characters, to fit easily within human working memory. Thus to find the maximum number of values we can encode, we need to find the largest unambiguous set of letters and numbers. There's no reason we can't consider the letters G-Z and some common punctuation, but I don't want to have to manually compare pairwise "does G sound like A?", "does G sound like B?", "does G sound like C?" myself. As we know, this would be O(n^2) linguistic work to do =)...
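
    To make the capacity math concrete (the 31-symbol alphabet below is a hypothetical result of such pruning, not a claim about the actual count):

        >>> 10**6    # six digits only
        1000000
        >>> 16**6    # six hex characters
        16777216
        >>> 31**6    # a 31-symbol unambiguous alphabet
        887503681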

  • How do I become a developer?

    - by ATester
    Since many of you are already there, I figured this might be a great place to invite ideas and suggestions for becoming a software developer.

    My background: I have some basic knowledge of programming in VB and C++ from a course I pursued 8 years ago. My main drawback is a lack of experience in software, since I was in a teaching career until a year ago. I am working as a QA tester and finally got a chance to write some automation test scripts using C#. I didn't have any prior knowledge of C#, but I was able to figure my way through it.

    Questions, given this context:

    - What would be a good approach to learn enough to actually be able to work as a developer?
    - What kind of learning path should I adopt, and which approaches speed up the learning curve?
    - Would pursuing a university course be helpful in terms of knowledge gained?
    - Are certification exams a way to go for a beginner?
    - Are community college courses useful? What about courses offered by private institutes/centers?
    - Any suggestions for some good books?

  • Install an i386 printer driver into an amd64 distribution, or how can I find a good printer based on features?

    - by Yanick Rochon
    Hi, I just bought a Lexmark Interpret S408 all-in-one printer. The box said that it supported Ubuntu 8.04, but I told myself it should work with Lucid... well, no. The only driver I have found is for i386, while I have an amd64 image installed; the architecture is incompatible. So the question is: is it possible to install that driver anyway, somehow? Or do I need to take the printer back to the store and buy another one? If the latter is the only alternative, I need a printer that:

    - has wireless connection capability
    - can do color printing
    - is of good price (less than $200 CAD)

    Thank you for your answers, help, and tips.

    UPDATE: The driver was given in the form of a deb package (for Debian distributions), and I managed to extract the actual deb package driver out of the install program. I ran

        sudo dpkg -i --force-all lexmark-inkjet-09-driver-1.5-1.i386.deb

    and the driver installed, and I was able to print something out. But that is pretty much where it ends; I cannot access any more of the printer's functions (e.g. scanner, fax, wifi settings, etc.). It should suffice for now, as I'm satisfied with the printer's features (and size, and price), but if I could have a fully Linux-supported printer like that one, I would return this one in exchange for the other.

  • How to pass one float as four unsigned chars to shader by glVertexPointAttrib?

    - by Kog
    For each vertex I use two floats for position and four unsigned bytes for color. I want to store all of them in one table, so I tried casting those four unsigned bytes to one float, but I am unable to do that correctly... All in all, my tests came down to this:

        GLfloat vertices[] = { 1.0f, 0.5f, 0, 1.0f, 0, 0 };
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), vertices);

        // VER1 - draws red triangle
        // unsigned char colors[] = { 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff };
        // glEnableVertexAttribArray(1);
        // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors);

        // VER2 - draws greenish triangle (not "pure" green)
        // float f = 255 << 24 | 255; // Hex: 0xff0000ff
        // float colors2[] = { f, f, f };
        // glEnableVertexAttribArray(1);
        // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors2);

        // VER3 - draws red triangle
        int i = 255 << 24 | 255; // Hex: 0xff0000ff
        int colors3[] = { i, i, i };
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors3);

        glDrawArrays(GL_TRIANGLES, 0, 3);

    Above code is used to draw one simple red triangle. My question is: why do versions 1 and 3 work correctly, while version 2 draws a greenish triangle? The hex values are the ones I read by inspecting the variables during debugging; they are equal for versions 2 and 3, so what causes the difference?
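
    The likely cause is value conversion versus bit reinterpretation: float f = 255 << 24 | 255; converts the integer 4278190335 to the nearest representable float, which has a completely different bit pattern, while the int array keeps the exact bytes 0xff0000ff that glVertexAttribPointer then reads as four unsigned bytes. A small illustration (plain C, offered as a sketch):

        #include <stdio.h>
        #include <string.h>
        #include <stdint.h>

        int main(void) {
            uint32_t rgba = 0xff0000ffu;       /* R=255, G=0, B=0, A=255 packed */

            float converted = (float)rgba;     /* value conversion: new bit pattern */
            float reinterpreted;
            memcpy(&reinterpreted, &rgba, 4);  /* bit copy: bytes stay ff 00 00 ff */

            printf("converted bits differ: %d\n",
                   memcmp(&converted, &rgba, 4) != 0);   /* prints 1 */
            return 0;
        }

    Keeping the colors in an int/uint32_t array (VER3), or memcpy-ing the packed value into the float slot, preserves the bytes; a plain int-to-float assignment (VER2) does not.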

  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background: The "all PKs must be surrogates" approach is not present in Codd's relational model or any SQL standard (ANSI, ISO or other). Canonical books seem to elude this restriction too. Oracle's own data dictionary schema uses natural keys in some tables and surrogate keys in other tables. I mention this because these people must know a thing or two about RDBMS design.

    PPDM (Professional Petroleum Data Management Association) recommends the same the canonical books do: use surrogate keys as primary keys when:

    - There are no natural or business keys
    - Natural or business keys are bad (change often)
    - The value of the natural or business key is not known at the time of inserting the record
    - Multicolumn natural keys (usually several FKs) exceed three columns, which makes joins too verbose

    Also, I have not found a canonical source that says natural keys need to be immutable. All I find is that they need to be very stable, i.e. they need to change only on very rare occasions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too.

    The origins of the "all-surrogates" approach seem to come from recommendations of some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of maintainability and readability of the SQL code. Much provision is made for something that may or may not happen in the future (the natural PK changed, so we will have to use the RDBMS's cascade-update functionality), at the expense of day-to-day tasks like having to join more tables in every query and having to write code for importing data between databases, an otherwise very straightforward procedure (due to the need to avoid PK collisions and to create stage/equivalence tables beforehand).

    Another argument is that indexes based on integers are faster, but that has to be supported with benchmarks. Obviously, long, varying varchars are not good for PKs. But indexes based on short, fixed-length varchars are almost as fast as integers.

    The questions:
    - Is there any canonical source that supports the "all PKs must be surrogates" approach?
    - Has Codd's relational model been superseded by a newer relational model?

  • How are system services started in 12.10?

    - by Salem
    One thing that has always confused me in Ubuntu is how system services are started. I know that Ubuntu uses Upstart and supports SysV, but which one is used to start the services? This matters when you want to set a service to "manual" start. For example, on my system I have files for the following services both in /etc/init.d/<service> (SysV) and in /etc/init/<service>.conf (Upstart): acpid, mysql, networking, qemu-kvm, ufw, libvirt-bin. So if I want to disable MySQL execution at startup, must I use the Upstart way or the SysV way to disable it? Also, how can I tell which of those is really used to start a given service?

    Edit: The real doubt here is not how to disable/enable services using SysV/Upstart. What really confuses me is that some services seem to be defined (and enabled) in SysV and Upstart at the same time. Is there any precedence between them (like, if mysql is enabled in both, launch it using SysV)? Or can it be the case that one tool uses the other in the background?
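
    A hedged way to inspect and control each side (mysql is just the example service):

        # Upstart jobs live in /etc/init/*.conf; list them and disable one with an override
        initctl list | grep mysql
        echo manual | sudo tee /etc/init/mysql.override

        # SysV scripts live in /etc/init.d with rc*.d links; disable with update-rc.d
        ls /etc/rc2.d | grep mysql
        sudo update-rc.d mysql disable

    As for precedence: when a service ships an Upstart job, its /etc/init.d entry is typically just a compatibility symlink to /lib/init/upstart-job, so the Upstart definition wins; running file /etc/init.d/mysql shows which case applies on your system.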

  • Joining a company to get experience vs. going alone [closed]

    - by daniels
    My goal is to build a successful web startup, say the next Digg or Twitter, and I am in doubt about which route is best to follow as a programmer. I see basically two options:

    1. Get an internship/job with an established online company, so that I could get a mentor and learn from more experienced programmers, their processes, methodology and so on. I could do this for 1-2 years, and then quit to start working on my own stuff.
    2. Start working on my own projects right away, starting with small ones and moving up gradually. This would give me more control over the things I work on, but I would lack contact with more experienced people, so I would need to figure out basic things on my own.

    Doing both is not an option in my opinion, because I would need to put a lot of effort/time into each if I were to learn and improve as a programmer. So is one route definitely better than the other? Is there a third one I am not considering?

    Background: I already work by myself developing content-based websites and doing SEO, and I am decent at it, so money is not a problem. Last year I started learning to program, first by myself, and now I am enrolled in a CS degree at a good university.
