Search Results

Search found 8172 results on 327 pages for 'vector graphics'.

Page 71/327

  • How to verify the system is using the right GPU after a system reset [duplicate]

    - by Antoros
    This question already has an answer here: Is my mobile AMD card being used? OS: Windows 8. CPU: Intel Core i7-3635QM. GPU 1: Intel HD Graphics 4000. GPU 2: AMD Radeon HD 8870M. Problem: I'm not sure whether Catalyst Control Center (CCC) is using the AMD card instead of the Intel one; I have run into several issues since updating to 8.1 and I don't know what to do. What happened: I installed the 8.1 update on release day. After about a minute of use I got a BSOD and Windows never loaded again. System Restore wouldn't recognize the 8.0 restore points, so I did a system reset back to Windows 8, since the laptop was only three weeks old. The reset went badly: it restored to factory but kept the registry almost intact, so I had to reinstall almost everything, the factory drivers ended up running against the updated drivers' registry entries, and CCC broke as well. What I've already done: installing new drivers on top of the old ones didn't work, so I ran the AMD uninstaller first; I uninstalled and reinstalled the Intel HD Graphics driver; I tried to install the AMD mobility driver, but the installer said it wasn't compatible (even though that's the only driver AMD provides on their site); I tried AMD's Auto-Detect, but it couldn't install the driver because the card was disabled, and the card was disabled because it had no driver (see the catch?); finally I worked around it with Samsung Update; the driver didn't appear as a download there, so I had to use the search and download it manually. Now the card shows up in Device Manager and in Catalyst, but only as "8800 series" (not the exact model), and I can't see it at all in dxdiag, GPU-Z or HWMonitor. Right-clicking the desktop and opening CCC shows only the Intel card. Launching a game set to "high performance" seems to speed it up a little, but I can't be sure. How can I verify the AMD card is working properly? HWMonitor won't show it even when set to high performance, the latest GPU-Z won't run because of a problem with the Intel driver, and the legacy versions won't run either. What can I do now? I don't even know whether I've fixed the problem, and I also want to use Adobe Premiere with the AMD card, but the option to run it on the AMD card instead of the Intel one is locked. Edit: it seems to work now, but I still can't change that setting for Adobe Premiere and the other programs that need it.

    Read the article

  • How do I force my std::map to deallocate memory used?

    - by monkeyking
    I'm using a std::map, and I can't seem to free its memory back to the OS. The code looks like this:

        int main() {
            aMap m;
            while (keepGoing) {
                while (fillUpMap) {
                    // populate m
                }
                doWhatIwantWithMap(m);
                m.clear();
                // flush some buffered values into the map for the next iteration
                flushIntoMap(m);
            }
        }

    Each fill-up pass allocates around a gigabyte, so I'm very interested in getting that memory back before it eats up everything I have. I've experienced the same thing with std::vector, but there I could force the memory to be released by swapping with an empty std::vector. That doesn't work with map. When I run it under valgrind it says all memory is freed, so it's not a leak; everything is cleaned up nicely after a run.
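    A minimal sketch of the same swap idiom applied to the map (the aMap typedef below is an assumption; the question doesn't show the key and value types). Note that even after the nodes are destroyed, whether the allocator actually hands pages back to the OS is up to the C runtime's malloc, so the process footprint may still not shrink:

        #include <map>

        typedef std::map<int, int> aMap;   // assumed; the question doesn't show the real types

        void releaseMap(aMap& m)
        {
            // Swap m with a freshly constructed, empty map. The temporary now owns
            // all of m's nodes and destroys them as soon as it goes out of scope.
            aMap().swap(m);
        }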

    Read the article

  • Viewing Postscript (or PDF) on OS X: Aliasing issues

    - by mankoff
    I am generating PostScript graphics and am trying to find a balance between no anti-aliasing and too much anti-aliasing. If I open the PostScript in the raw Ghostscript viewer, gs, it looks good: the text appears anti-aliased, but the image stays nice and blocky. Unfortunately gs has no real user interface and loses all of the nice things Preview.app has. I could install gv, but the dependency bloat is huge; it pulls in all of GNOME, and even then it isn't a great viewer compared to Preview.app or Skim.app. (Screenshot of the gs rendering omitted here.) From a user-interaction and Mac-ish perspective, Preview.app (or Skim.app) is a much nicer program to use. They have an option to turn anti-aliasing on or off, but neither setting looks very good: with anti-aliasing on, the image is blurry; with it off, the graphic matches what gs shows, but there are two issues. Minor issue: the font is ugly, uglier than in gs. Major issue: every PDF is then rendered without anti-aliasing, making ordinary PDFs full of text hard to read. So, in summary: Is there a way to manually generate the PDF from the PS that overcomes these issues? Is there a way to find a middle ground between aliased and anti-aliased in Preview.app? Is there another app that renders with gs-like quality but has a decent UI like Skim.app or Preview.app? Is there a way to have Preview.app turn off anti-aliasing for only one file (containing graphics) but leave it enabled in general, so that text PDFs are still readable?

    Read the article

  • Stack Overflow Accessing Large Vector

    - by cam
    I'm getting a stack overflow on the first iteration of this for loop:

        for (int q = 0; q < SIZEN; q++) {
            cout << nList[q] << " ";
        }

    nList is a vector of ints with 376 items. The size of nList depends on a constant defined in the program. The program works for every value up to 376, and above 376 it stops working. Any thoughts?
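    The snippet above doesn't show how nList and SIZEN are actually declared, so this is only a guess at the usual culprit: a large array with automatic storage duration lives on the stack and can overflow it, while a std::vector keeps its elements on the heap. A minimal sketch of the difference:

        #include <iostream>
        #include <vector>

        const int SIZEN = 376;

        int main()
        {
            // int nList[2000000];               // a large array with automatic storage
            //                                   // lives on the stack and can overflow it
            std::vector<int> nList(SIZEN, 0);    // a vector's elements live on the heap;
                                                 // only a small header object is on the stack
            for (int q = 0; q < SIZEN; q++)
                std::cout << nList[q] << " ";
            std::cout << "\n";
            return 0;
        }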

    Read the article

  • Windows 7 x64 support for Intel GMA 3650 (or GMA 3600)

    - by Loom
    I recently purchased an Intel D2700MUD motherboard and I cannot find Windows 7 x64 drivers for its integrated graphics (Intel GMA 3650, aka PowerVR SGX545). The accompanying CD contains the Win7 x86 version only; when I run it I get an error: "This computer does not meet the minimum requirements for installing the software." I tried the online Intel Driver Update Utility for graphics, using Chrome, Firefox and Internet Explorer, without success: first a UAC prompt appears, and then a progress bar spins endlessly with the text "Analyzing computer...". The UAC prompt reads: Program file name: System Requirements Lab; Verified publisher: Husdawg, LLC. I downloaded the utility (intel_srldetect_4.5.5.0) and ran it from my hard disk, and got the error: "A network error occurred while attempting to read from the file: C:\Users\Loom\Downloads\SystemRequirementsLab_intel_4.5.5.0.msi". The standard VGA driver works for this video card, but without hardware acceleration: "Hardware acceleration is either disabled or not supported by your video card driver, which could slow game performance. Make sure you have the latest video card driver installed and that hardware acceleration is turned on." Where can I get an appropriate driver?

    Read the article

  • Can't modify XNA Vector components

    - by Matt H
    I have a class called Sprite, and ballSprite is an instance of that class. Sprite has a Vector2 property called Position. I'm trying to increment the vector's X component like this: ballSprite.Position.X++; but it causes this error: "Cannot modify the return value of 'WindowsGame1.Sprite.Position' because it is not a variable". Is it not possible to set components like this? The tooltip for the X and Y fields says "Get or set ...", so I can't see why this isn't working.
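    The root cause is that XNA's Vector2 is a value type, so the Position property hands back a copy of the vector rather than a reference to the sprite's field, and the compiler refuses to let that temporary copy be modified in place. The same pitfall can be reproduced in C++ with a getter that returns by value; a hypothetical sketch (the Sprite class here is made up) of the copy-out, modify, write-back pattern that works around it:

        struct Vector2 { float X; float Y; };

        class Sprite {
        public:
            Vector2 Position() const { return position_; }       // returns a copy, like a
                                                                  // property of a value type
            void setPosition(const Vector2& p) { position_ = p; }
        private:
            Vector2 position_{0.0f, 0.0f};
        };

        void nudgeRight(Sprite& ballSprite)
        {
            // ballSprite.Position().X++;        // would only modify the temporary copy
            Vector2 p = ballSprite.Position();   // copy the vector out,
            p.X++;                               // change the component,
            ballSprite.setPosition(p);           // and assign the whole vector back
        }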

    Read the article

  • Normal vector of a face loaded from an FBX model during collision?

    - by Corey Ogburn
    I'm loading a simple six-sided cube from a UV-mapped FBX model and I'm using a BoundingBox to test for collisions. Once I determine there's a collision, I want to use the normal vector of the collided face to correct the movement of whatever collided with the cube. I suppose this is a two-part question: 1) How can I determine which face of the cube was hit in a collision? 2) How can I get the normal vector of that face?
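    Since an XNA BoundingBox is axis-aligned, the hit face can be recovered from a contact point (for example, the colliding object's centre clamped to the box): whichever axis that point lies furthest along, relative to the box's half-extents, is the face that was struck, and its normal is the corresponding signed unit axis. A language-agnostic sketch, written here in C++ rather than XNA's C#, with all type and function names hypothetical:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Returns the axis-aligned face normal of the box face nearest to 'contact'.
        Vec3 faceNormal(const Vec3& boxCenter, const Vec3& halfExtents, const Vec3& contact)
        {
            // Offset of the contact point from the box centre, normalised per axis.
            float dx = (contact.x - boxCenter.x) / halfExtents.x;
            float dy = (contact.y - boxCenter.y) / halfExtents.y;
            float dz = (contact.z - boxCenter.z) / halfExtents.z;

            float ax = std::fabs(dx), ay = std::fabs(dy), az = std::fabs(dz);
            if (ax >= ay && ax >= az) return Vec3{ dx > 0 ? 1.0f : -1.0f, 0.0f, 0.0f };
            if (ay >= az)             return Vec3{ 0.0f, dy > 0 ? 1.0f : -1.0f, 0.0f };
            return Vec3{ 0.0f, 0.0f, dz > 0 ? 1.0f : -1.0f };
        }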

    Read the article

  • How to better create stacked bar graphs with multiple variables from ggplot2?

    - by deoksu
    I often have to make stacked barplots to compare variables, and because I do all my stats in R, I prefer to do all my graphics in R with ggplot2. I would like to learn how to do two things. First, I would like to be able to add proper percentage tick marks for each variable rather than tick marks by count; counts would be confusing, which is why I take the axis labels out completely. Second, there must be a simpler way to reorganize my data to make this happen. It seems like the sort of thing I should be able to do natively in ggplot2 with plyr, but the documentation for plyr is not very clear (and I have read both the ggplot2 book and the online plyr documentation). My best graph so far looks like the one attached (image not included here); the R code I use to produce it is the following:

        library(epicalc)

        ### recode the variables to factors ###
        recode(c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit,
                 int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel,
                 int_newhth, int_bapo, int_wopo, int_eupo, int_educ),
               c(1,2,3,4,5,6,7,8,9, NA),
               c('Very Interested','Somewhat Interested','Not Very Interested',
                 'Not At All interested',NA,NA,NA,NA,NA,NA))

        ### Combine recoded variables to a common vector
        Interest1<-c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit,
                     int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel,
                     int_newhth, int_bapo, int_wopo, int_eupo, int_educ)

        ### Create a second vector to label the first vector by original variable ###
        a1<-rep("News about Bangladesh", length(int_newcoun))
        a2<-rep("Neighboring Countries", length(int_newneigh))
        [...]
        a17<-rep("Education", length(int_educ))
        Interest2<-c(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17)

        ### Create a Weighting vector of the proper length ###
        Interest.weight<-rep(weight, 17)

        ### Make and save a new data frame from the three vectors ###
        Interest.df<-cbind(Interest1, Interest2, Interest.weight)
        Interest.df<-as.data.frame(Interest.df)
        write.csv(Interest.df, 'C:\\Documents and Settings\\[name]\\Desktop\\Sweave\\InterestBangladesh.csv')

        ### Sort the factor levels to display properly ###
        Interest.df$Interest1<-relevel(Interest$Interest1, ref='Not Very Interested')
        Interest.df$Interest1<-relevel(Interest$Interest1, ref='Somewhat Interested')
        Interest.df$Interest1<-relevel(Interest$Interest1, ref='Very Interested')
        Interest.df$Interest2<-relevel(Interest$Interest2, ref='News about Bangladesh')
        Interest.df$Interest2<-relevel(Interest$Interest2, ref='Education')
        [...]
        Interest.df$Interest2<-relevel(Interest$Interest2, ref='European Politics')
        detach(Interest)
        attach(Interest)

        ### Finally create the graph in ggplot2 ###
        library(ggplot2)
        p<-ggplot(Interest, aes(Interest2, ..count..))
        p<-p+geom_bar((aes(weight=Interest.weight, fill=Interest1)))
        p<-p+coord_flip()
        p<-p+scale_y_continuous("", breaks=NA)
        p<-p+scale_fill_manual(value = rev(brewer.pal(5, "Purples")))
        p
        update_labels(p, list(fill='', x='', y=''))

    I'd very much appreciate any tips, tricks or hints. Thanks.

    Read the article

  • Should I switch my graphics mode in the BIOS to avoid using Bumblebee?

    - by Fawkes5
    I have just purchased an Acer 3830TG from the TimelineX series. To my surprise, I found out that there is no first-party Linux support for NVIDIA Optimus. Bumblebee works great, but the battery life with the discrete graphics card always running is not so great. I don't use Linux for games, so I don't really need the graphics card on; I have Windows for that. In my BIOS I have the option to change the graphics mode from switchable to integrated. If I do this and reinstall Ubuntu, what will happen? Will my NVIDIA card just turn off? Will everything work properly, as if I weren't running an Optimus laptop? Is this recommended over dealing with Bumblebee? What is the best thing I could do?

    Read the article

  • Why am I stuck at 640x480 on an Optimus hybrid-graphics system?

    - by exilada
    I have an onboard Intel HD 3000 graphics card and an NVIDIA 520MX card with Optimus technology. I tried to install the NVIDIA driver, but it failed, and now I can't use anything: I'm stuck at a single 640x480 resolution, every other output is reported as disconnected, and I can't connect anything.

        $ xrandr
        Screen 0: minimum 320 x 200, current 640 x 480, maximum 8192 x 8192
        LVDS1 connected 640x480+0+0 (normal left inverted right x axis y axis) 344mm x 194mm
           640x480       59.9*
        VGA1 disconnected (normal left inverted right x axis y axis)
        HDMI1 disconnected (normal left inverted right x axis y axis)
        DP1 disconnected (normal left inverted right x axis y axis)

        $ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)

        $ glxinfo | grep vendor
        server glx vendor string: SGI
        client glx vendor string: Mesa Project and SGI
        OpenGL vendor string: Tungsten Graphics, Inc

    Something is wrong here, I guess. I tried some solutions, but they didn't work; nvidia-xconfig can't even produce a config file after all this. By the way, the system sometimes reports errors about Xorg. Sorry for my English, and thanks for the help.

    Read the article

  • How To Draw More Precise Lines using Core Graphics and CALayer

    - by user308444
    Hello, I am having a hard time making this UI element look the way I want (see screenshot, not included here). Notice how in the image on the right the line width and darkness look inconsistent compared to the image on the left (which happens to be a screen grab from Safari), where the border width is more consistent. How does Apple make their lines so perfect? I'm using a CALayer and the Core Graphics API to draw the image on the right. Is it possible to draw such perfect lines with the standard APIs?
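    One common cause of soft, uneven one-pixel lines in Core Graphics is stroking along integer coordinates: a 1.0-wide stroke centred on an integer boundary straddles two pixel rows and gets anti-aliased into a grey band, while offsetting the path by half a pixel puts the stroke exactly over one row. A minimal sketch using the C-level Core Graphics calls (valid when compiled as C, C++ or Objective-C); the function name and the assumption that drawing happens in a CALayer delegate's draw callback are hypothetical:

        #include <CoreGraphics/CoreGraphics.h>

        // Draws a crisp, 1-pixel horizontal separator at row y.
        static void drawCrispSeparator(CGContextRef ctx, CGFloat y, CGFloat width)
        {
            CGContextSetLineWidth(ctx, 1.0);
            CGContextSetGrayStrokeColor(ctx, 0.6, 1.0);
            CGContextMoveToPoint(ctx, 0.0, y + 0.5);        // half-pixel alignment keeps the
            CGContextAddLineToPoint(ctx, width, y + 0.5);   // stroke on a single pixel row
            CGContextStrokePath(ctx);
        }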

    Read the article

  • Qt Graphics Scene mouse event propagation

    - by Olorin
    Hello, I'm learning Qt and I'm doing the following to add some widgets to a graphics scene:

        void MainWindow::addWidgets(QList<QWidget *> &list, int code)
        {
            if (code == CODE_INFO) {
                QWidget *layoutWidget = new QWidget();
                QVBoxLayout *layout = new QVBoxLayout();
                foreach (QWidget *w, list) {
                    layout->addWidget(w);
                    this->connect(((ProductInfo*)w), SIGNAL(productClicked()),
                                  this, SLOT(getProductDetails()));
                }
                layoutWidget->setLayout(layout);
                this->scene->addWidget(layoutWidget);
            }
        }

    My ProductInfo class processes mouse release and emits a signal:

        void ProductInfo::mouseReleaseEvent(QMouseEvent *e)
        {
            QWidget::mouseReleaseEvent(e);
            emit productClicked();
        }

    The problem is that after adding the widgets to the scene they no longer get the mouse release event and don't emit the productClicked signal, but if I add them to the main window (not to the scene) they work as expected. What am I doing wrong?

    Read the article

  • Flex: Why is line obscured by canvas' background

    - by mauvo
    I want MyCanvas to draw lines on itself, but they seem to be drawn behind the background. What's going on, and how should I do this?

    Main.mxml:

        <?xml version="1.0" encoding="utf-8"?>
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" xmlns:my="*">
            <my:MyCanvas width="300" height="300" id="myCanvas"></my:MyCanvas>
            <mx:Button label="Draw" click="myCanvas.Draw();"></mx:Button>
        </mx:Application>

    MyCanvas.as:

        package {
            import mx.containers.Canvas;

            public class MyCanvas extends Canvas {
                public function MyCanvas() {
                    this.setStyle("backgroundColor", "white");
                }

                public function Draw():void {
                    graphics.lineStyle(1);
                    graphics.moveTo(-10, -10);
                    graphics.lineTo(width + 10, height + 10);
                }
            }
        }

    Thanks.

    Read the article

  • Actionscript: Why is drawRoundRectComplex() not documented?

    - by Chunk1978
    In studying ActionScript 3's Graphics class, I've come across the undocumented drawRoundRectComplex() method. It's a variant of drawRoundRect(), but with eight parameters, the final four being the corner diameters (x, y, width, height, top-left, top-right, bottom-left, bottom-right).

        // example
        var sp:Sprite = new Sprite();
        sp.graphics.lineStyle(1, 0x000000);
        sp.graphics.drawRoundRectComplex(0, 0, 100, 50, 10, 20, 0, 10);
        addChild(sp);

    This seems to be a pretty useful method, so I'm just curious whether anyone knows of any reason why Adobe chose not to document it.

    Read the article

  • Sex appeal of computer graphics: movie-like UI systems

    - by anon
    It's well known that (1) the way computers actually work and (2) the way computers are portrayed in movies are not the same, and in particular (2) looks much, much cooler than (1). Where can I learn more about making flashy, superficially useful but deep-down useless fancy graphics UIs like that? It's almost in the realm of "Hollywood special effects", like fire and smoke, but I don't want natural phenomena; I want user interfaces. Concrete question: where can I learn about creating flashy, cool-looking (though not necessarily useful) user interfaces? [Preferably in OpenGL]

    Read the article

  • Which C++ graphics library should I use?

    - by mspoerr
    Hello, I found the following graph-drawing libraries, but I am not sure which one I should use; maybe there are more: Graphviz (http://www.graphviz.org/), the Boost Graph Library (http://www.boost.org/doc/libs/1_42_0/libs/graph/doc/index.html), Lemon (http://lemon.cs.elte.hu/trac/lemon) and igraph (http://igraph.sourceforge.net/introduction.html). What it should do: draw an undirected network map, come as a header-only or static library for Windows, and produce output in a format the user can edit. Graphviz is the only one I have tried so far, but I found no static lib for it, I failed to build it on my own, and the documentation could be better. Therefore I looked around and found the other three libraries. I would be glad to get some recommendations on which library to choose. Thanks, /mspoerr
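    One possible combination, sketched here under the assumption that Boost is on the include path: the Boost Graph Library holds the undirected network in headers only and emits Graphviz DOT text, which is plain text the user can edit before laying it out with the Graphviz tools (writing DOT, unlike parsing it, should not require linking a compiled Boost library). A minimal example:

        #include <iostream>
        #include <boost/graph/adjacency_list.hpp>
        #include <boost/graph/graphviz.hpp>

        int main()
        {
            typedef boost::adjacency_list<boost::vecS, boost::vecS,
                                          boost::undirectedS> Graph;

            Graph g(4);                 // four vertices, no edges yet
            boost::add_edge(0, 1, g);
            boost::add_edge(1, 2, g);
            boost::add_edge(2, 3, g);
            boost::add_edge(3, 0, g);

            // Emits DOT text such as "graph G { 0--1; 1--2; ... }" for dot/neato to render.
            boost::write_graphviz(std::cout, g);
            return 0;
        }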

    Read the article

  • Loop colours from variables for graphics.py [Python 3.2]

    - by user1056548
    I am creating a graphics program that draws 100 x 100 squares next to each other, depending on the user-specified grid size. The user also inputs four colours for the squares to be coloured with (e.g. if they enter red,green,blue,yellow the squares will be coloured in that order, repeating the colours). Is it possible to loop over the colours from the variables the user has given? Here is what I have so far:

        def main():
            print ("Please enter four comma seperated colours e.g.: 'red,green,blue,yellow'\n\
        Allowed colours are: red, green, blue, yellow and cyan")
            col1, col2, col3, col4 = input("Enter your four colours: ").split(',')
            win = GraphWin ("Squares", 500, 500)
            colours = [col1, col2, col3, col4]
            drawSquare (win, col1, col2, col3, col4, colours)
            win.getMouse()
            win.close()

        def drawSquare(win, col1, col2, col3, col4, colours):
            for i in range (4):
                for j in range (len(colours)):
                    colour = colours[j]
                    x = 50 + (i * 50)
                    circle = Circle (Point (x,50), 20)
                    circle.setFill(colour)
                    circle.draw(win)

    I think I should be using a list in some way, but can't work out exactly how to do it. Can anybody help?

    Read the article

  • Sex appeal of computer graphics: movie-like UI systems [closed]

    - by anon
    It's well known that (1) the way computers actually work and (2) the way computers are portrayed in movies are not the same, and in particular (2) looks much, much cooler than (1). Where can I learn more about making flashy, superficially useful but deep-down useless fancy graphics UIs like that? It's almost in the realm of "Hollywood special effects", like fire and smoke, but I don't want natural phenomena; I want user interfaces. Concrete question: where can I learn about creating flashy, cool-looking (though not necessarily useful) user interfaces? [Preferably in OpenGL]

    Read the article

  • Flickering when repainting a JPanel inside a JScrollPane

    - by pR0Ps
    I'm having a problem with repainting a JPanel inside a JScrollPane. Basically, I'm just trying to 'wrap' my existing EditPanel (it originally extended JPanel) in a JScrollPane. It seems that the JPanel updates too often, causing massive flickering. How do I stop this from happening? I tried using setIgnoreRepaint(), but it didn't seem to do anything. Will this current implementation work, or do I need to create another inner class to fine-tune the JPanel I'm using to display graphics? Skeleton code:

        public class MyProgram extends JFrame {

            public MyProgram() {
                super();
                add(new EditPanel());
                pack();
            }

            private class EditPanel extends JScrollPane {
                private JPanel graphicsPanel;

                public EditPanel() {
                    graphicsPanel = new JPanel();
                }

                public void paintComponent(Graphics g) {
                    graphicsPanel.revalidate(); // update the scrollpane to the current panel size
                    repaint();
                    Graphics g2 = graphicsPanel.getGraphics();
                    g2.drawImage(imageToDraw, 0, 0, null);
                }
            }
        }

    Read the article

  • How to generate graphics in Photoshop using ActionScript?

    - by understack
    I have a text file with content like this:

        id, pixelsize, color, text
        block1, 200x60, black, Header
        block2, 200x180, white, Body
        block2, 200x60, black, Footer

    Now, using ActionScript, I want to: (1) generate a PSD file which, after parsing the given file, draws a graphic of three blocks (mock-up image not included here) placed vertically on top of each other; (2) convert this PSD file into a PDF automatically using the script; and (3) automate the whole process without opening Photoshop. Is it possible? Please help. Thanks.

    Read the article

  • Android Graphics Memory Limits

    - by Gordon
    I am creating an Android game using OpenGL and a Cocos2d port (http://code.google.com/p/cocos2d-android-1). I am targeting a wide range of devices and want to ensure that it performs well. I only test on a Nexus One and am hoping to get some input from people with experience on slower devices. Currently the game uses two 1024x1024 textures as well as two 256x256 textures. Is this within the limits of most devices? Does anyone have a rule of thumb or experience with graphics memory limits in these cases? If graphics memory is exceeded, does it page to normal memory?

    Read the article

  • Programmatically creating vector arrows in KML

    - by mettadore
    Does anyone have any practical examples of programmatically drawing icons as vectors in KML? Specifically, I have data with a magnitude and an azimuth at given coordinates, and I would like to have icons (or another graphical element) generated based on these values. Some thoughts on how I might approach it: (1) Image directory (the brute-force way): make an image directory of 360 different image files (probably by batch-rotating a single image), each pointing in the corresponding azimuth. I've seen things like "Excel to KML", but I'm looking for code I can use within a program rather than a web utility. Issue: the arrow does not convey magnitude, so that would have to be a label; I'd rather dynamically lengthen the arrow. (2) Line creation in KML: perhaps write a routine that creates a line with its origin at the coordinate point, its length proportional to the magnitude, and its angle set by the azimuth. Two more short lines, angled perhaps 30 degrees or so from the end of the main line, would form the arrowhead. Issues: it isn't a separate image icon, so I'm not sure how it would work in KML, and I'm also not sure how easy the algorithm would be to write. (3) Separate image generation: perhaps create a PHP file that uses ImageMagick or something similar to dynamically generate a .png in a similar manner to the above, and then link to the icon using the URI "domain.tld/imagegen.php?magnitude=magvalue&azimuth=azmvalue". Issue: I still have the problem of actually writing the image-generation algorithm. So, the question: has anyone else come up with solutions for programmatic vector (rather than merely arrow) generation?
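    A rough sketch of approach (2), line creation in KML, written here in C++ since the question doesn't name a language. Given an origin, an azimuth in degrees clockwise from north and a magnitude in metres, it emits a Placemark whose MultiGeometry holds the shaft plus two head segments; the metres-to-degrees conversion is a flat-earth approximation, and the 30-degree, quarter-length arrowhead is an arbitrary styling choice:

        #include <cmath>
        #include <iostream>
        #include <string>

        const double kPi = 3.14159265358979323846;
        const double kDegToRad = kPi / 180.0;
        const double kMetresPerDegLat = 111320.0;   // rough; fine for short arrows

        std::string coord(double lon, double lat) {
            return std::to_string(lon) + "," + std::to_string(lat) + ",0 ";
        }

        // End point of a segment of the given length and bearing starting at (lat, lon).
        void offset(double lat, double lon, double bearingDeg, double metres,
                    double& outLat, double& outLon) {
            double b = bearingDeg * kDegToRad;
            outLat = lat + (metres * std::cos(b)) / kMetresPerDegLat;
            outLon = lon + (metres * std::sin(b)) / (kMetresPerDegLat * std::cos(lat * kDegToRad));
        }

        std::string arrowPlacemark(double lat, double lon, double azimuthDeg, double magnitudeMetres) {
            double tipLat, tipLon, leftLat, leftLon, rightLat, rightLon;
            offset(lat, lon, azimuthDeg, magnitudeMetres, tipLat, tipLon);
            // Arrowhead: two short segments sweeping back 30 degrees either side of the shaft.
            offset(tipLat, tipLon, azimuthDeg + 150.0, 0.25 * magnitudeMetres, leftLat, leftLon);
            offset(tipLat, tipLon, azimuthDeg + 210.0, 0.25 * magnitudeMetres, rightLat, rightLon);

            return "<Placemark><MultiGeometry>"
                   "<LineString><coordinates>" + coord(lon, lat) + coord(tipLon, tipLat) +
                   "</coordinates></LineString>"
                   "<LineString><coordinates>" + coord(leftLon, leftLat) + coord(tipLon, tipLat) +
                   coord(rightLon, rightLat) + "</coordinates></LineString>"
                   "</MultiGeometry></Placemark>";
        }

        int main() {
            // Example: a 500 m arrow pointing 60 degrees east of north.
            std::cout << arrowPlacemark(45.0, -122.0, 60.0, 500.0) << "\n";
        }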

    Read the article

  • AndEngine VS Android's Canvas VS OpenGLES - For rendering a 2D indoor vector map

    - by Orchestrator
    This is a big issue I've been trying to figure out for a long time. I'm working on an application that should include a 2D vector indoor map. The map will be drawn from an .svg file that specifies all the data for the lines, curved lines (paths) and rectangles to be drawn. My main requirements for the map are: support for touch events, so I can detect exactly where a finger is touching; great image quality, especially for curved and diagonal lines (anti-aliasing); and, optional but very nice to have, a built-in ability to zoom, pan and rotate. So far I have tried AndEngine and Android's Canvas. With AndEngine I had trouble implementing anti-aliasing for rendering smooth diagonal lines or drawing curved lines, and as far as I understand this is not an easy thing to implement in AndEngine, though I have to mention that AndEngine's ability to zoom and pan with the camera, instead of modifying the objects on the screen, was really nice to have. I also have a little experience with the built-in Android Canvas, mainly viewing simple bitmaps, but I'm not sure whether it supports all of these things, and especially whether it would produce smooth results. Last but not least, there's the option of plain OpenGL ES 1 or 2, which, as far as I understand, should be able to support all the features I require with enough work; however, it seems hard to implement, and I've never programmed in OpenGL or anything like it, although I'm very willing to learn. To sum it all up, I need a platform that provides the three things I mentioned, but also, very importantly, lets me implement this feature as fast as possible. Any kind of answer or suggestion would be very much welcomed, as I'm very eager to solve this problem. Thanks!

    Read the article

  • Converting a single-dimensional vector or array to two dimensions in MATLAB

    - by pac
    Well, I do not know if I used the exact term; I tried to find an answer on the net. Here is what I need. I have

        a =
             1     4     7
             2     5     8
             3     6     9

    If I do a(4), the value is 4, so it is reading the first column top to bottom and then continuing to the next, and I don't know why. What I need is to call it using two indices, as row and column: a(3,2) = 4, or even better, if I can call it in the following way: a{3}(2) = 4. What is this process really called (I want to learn), and how do I perform it in MATLAB? I thought of a loop; is there a built-in function? Thanks a lot. Check this:

        a =
            18    18    16    18    18    18    16     0     0     0
            16    16    18     0    18    16     0    18    18    16
            18     0    18    18     0    16     0     0     0    18
            18     0    18    18    16     0    16     0    18    18

        >> a(4)
        ans = 18
        >> a(5)
        ans = 18
        >> a(10)
        ans = 18

    I tried reshape; it is reshaping, not converting the call into two indices.

    Read the article

  • Caching vector addition over changing collections

    - by DRMacIver
    I have the following setup: I have a largish number of UUIDs (currently about 10k but expected to grow without bound; they're user IDs) and a function f : id → sparse vector with 32-bit integer values (no need to worry about precision). The function is reasonably expensive, not outrageously so, but probably on the order of a few hundred milliseconds for a given id. The dimension of the sparse vectors should be assumed to be unbounded, as new dimensions can appear over time, but in practice it is unlikely to ever exceed about 20k (and individual results of f are unlikely to have more than a few hundred non-zero values). I want to support the following operations efficiently: add a new id to the collection; invalidate an existing id; retrieve the sum of f(id) over the collection in O(changes since last retrieval). In other words, I want to cache the sum of the vectors in a way that is reasonable to maintain incrementally. One option would be to support a remove-id operation and treat invalidation as a remove followed by an add. The problem with this is that it requires keeping track of all the old values of f, which is expensive in space. I potentially need to use many instances of this sort of cached structure, so I would like to avoid that. The likely usage pattern is that new ids are added at a fairly continuous rate and are frequently invalidated at first; ids which have been invalidated recently are much more likely to be invalidated again than ones which have remained valid for a long time, but in principle an old id can still be invalidated. Ideally I don't want to do this in memory (or at least I want a way to save the result to disk efficiently), so an idea that lets me piggyback on an existing DB implementation of some sort would be especially appreciated.
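    For concreteness, a sketch of the bookkeeping being described: a running sum plus one stored contribution per currently-valid id (rather than all historical values), so an invalidation subtracts the stale f(id) and adds the recomputed one, and retrieval is free because the sum is maintained incrementally. Whether one sparse vector per live id fits the space budget is exactly the trade-off raised above; all type names are illustrative, this version is in-memory only, and C++ stands in for whatever language the system actually uses:

        #include <cstdint>
        #include <unordered_map>
        #include <utility>

        typedef std::unordered_map<std::uint32_t, std::int64_t> SparseVec;  // dimension -> value
        typedef std::uint64_t Id;

        static void axpy(SparseVec& acc, const SparseVec& v, int sign) {
            for (const auto& kv : v) acc[kv.first] += sign * kv.second;
        }

        class SumCache {
        public:
            template <typename F>
            void add(Id id, F&& f) {                 // a new id joins the collection
                SparseVec v = f(id);
                axpy(sum_, v, +1);
                contrib_[id] = std::move(v);
            }
            template <typename F>
            void invalidate(Id id, F&& f) {          // f(id) has changed
                axpy(sum_, contrib_[id], -1);        // back out the stale contribution
                SparseVec v = f(id);
                axpy(sum_, v, +1);
                contrib_[id] = std::move(v);
            }
            const SparseVec& sum() const { return sum_; }
        private:
            SparseVec sum_;
            std::unordered_map<Id, SparseVec> contrib_;  // one stored vector per live id
        };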

    Read the article
