Search Results

Search found 29502 results on 1181 pages for 'line segment'.


  • How can I malloc an array of structs to do the following using file operations?

    - by RHN
    How can I malloc an array of structs to do the following using file operations? The file is a .txt; its contents look like: 10 22 3.3 33 4.4. I want to read the first line from the file, and then I want to malloc an array of input structures equal to the number of lines to be read in from the file. Then I want to read the data from the file into the malloc'd array of structures. Later on I want to store the size of the array into the variable size and return the array. After this I want to create another function that prints out the data in the input variable in the same form as the input file, and a function called clean_data that will free the malloc'd memory at the end. I have tried something like:

        struct input {
            int a;
            float b, c;
        };

        struct input* readData(char *filename, int *size);

        int main() {
            return 0;
        }

        struct input* readData(char *filename, int *size) {
            char filename[] = "input.txt";
            FILE *fp = fopen(filename, "r");
            int num;
            while (!feof(fp)) {
                fscanf(fp, "%f", &num);
                struct input *arr = (struct input*)malloc(sizeof(struct input));
            }
        }
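
    For reference, a minimal corrected sketch in C. It assumes the first number in the file is the record count and each following line holds one int and two floats to match the struct (the sample data is ambiguous on this); clean_data and the general shape come from the question, the rest fixes the attempt's bugs (the shadowed filename parameter, fscanf'ing %f into an int, one malloc per loop iteration instead of one array, and the while(!feof) pattern):

        #include <stdio.h>
        #include <stdlib.h>

        struct input {
            int a;
            float b, c;
        };

        /* First line of the file = record count, then one record per line. */
        struct input *readData(const char *filename, int *size) {
            FILE *fp = fopen(filename, "r");
            if (fp == NULL) return NULL;

            int n;
            if (fscanf(fp, "%d", &n) != 1) { fclose(fp); return NULL; }

            struct input *arr = malloc(n * sizeof *arr); /* one allocation for all records */
            if (arr == NULL) { fclose(fp); return NULL; }

            for (int i = 0; i < n; i++) {
                if (fscanf(fp, "%d %f %f", &arr[i].a, &arr[i].b, &arr[i].c) != 3)
                    break; /* stop on malformed input instead of testing feof */
            }
            fclose(fp);
            *size = n; /* hand the count back to the caller */
            return arr;
        }

        void clean_data(struct input *arr) {
            free(arr);
        }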

    Read the article

  • I don't understand how TDD helps me get a good design if I need a design to start testing it

    - by Michael Stum
    I'm trying to wrap my head around TDD, specifically the development part. I've looked at some books, but the ones I found mainly tackle the testing part - the history of NUnit, why testing is good, Red/Green/Refactor, and how to create a String Calculator. Good stuff, but that's "just" unit testing, not TDD. Specifically, I don't understand how TDD helps me get a good design if I need a design to start testing it. To illustrate, imagine these 3 requirements: a catalog needs to have a list of products; the catalog should remember which products a user viewed; users should be able to search for a product. At this point, many books pull a magic rabbit out of a hat and just dive into "Testing the ProductService", but they don't explain how they came to the conclusion that there is a ProductService in the first place. That is the "development" part in TDD that I'm trying to understand. There needs to be an existing design, but stuff outside of entity-services (that is: there is a Product, so there should be a ProductService) is nowhere to be found (e.g., the second requirement requires me to have some concept of a User, but where would I put the remembering functionality? And is Search a feature of the ProductService or a separate SearchService? How would I know which I should choose?) According to SOLID, I would need a UserService, but if I design a system without TDD, I might end up with a whole bunch of single-method services. Isn't TDD intended to make me discover my design in the first place? I'm a .NET developer, but Java resources would also work. I feel that there doesn't seem to be a real sample application or book that deals with a real line-of-business application. Can someone provide a clear example that illustrates the process of creating a design using TDD?
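
    As a hedged illustration of the outside-in answer usually given here (ProductCatalog and Viewer are invented for this sketch, not taken from any book): you don't start from a ProductService, you start from a failing test for the first requirement, and the first type is whatever that test forces into existence. In C# with NUnit:

        using System.Collections.Generic;
        using NUnit.Framework;

        [TestFixture]
        public class CatalogTests
        {
            // Requirement 1 drives the first type into existence: no ProductService
            // is assumed up front; a ProductCatalog appears because the test needs one.
            [Test]
            public void NewCatalog_ExposesTheProductsItWasGiven()
            {
                var catalog = new ProductCatalog(new List<string> { "Widget" });
                Assert.That(catalog.Products, Is.EquivalentTo(new[] { "Widget" }));
            }

            // Requirement 2 forces exactly the design question the post asks: where
            // does "remembering" live? It starts on the catalog; a later test that
            // becomes awkward to write is the signal to extract a separate object.
            [Test]
            public void ViewingAProduct_RecordsItAgainstTheViewer()
            {
                var catalog = new ProductCatalog(new List<string> { "Widget" });
                var viewer = new Viewer();
                catalog.View(viewer, "Widget");
                Assert.That(viewer.ViewedProducts, Contains.Item("Widget"));
            }
        }

        // Minimal code to make the tests pass ("green"); refactoring comes after.
        public class ProductCatalog
        {
            public ProductCatalog(List<string> products) { Products = products; }
            public List<string> Products { get; private set; }
            public void View(Viewer viewer, string product) { viewer.ViewedProducts.Add(product); }
        }

        public class Viewer
        {
            public List<string> ViewedProducts = new List<string>();
        }

    The point of the sketch is the mechanism, not the resulting classes: the design decision happens at the moment a test gets hard to write, which is when a SearchService or similar earns its existence.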

    Read the article

  • Booting off a ZFS root in 14.04

    - by RJVB
    I've been running a Debian derivative (LMDE) on a ZFS root for half a year now. It was created by cloning a regular ext4-based install with all the necessary packages onto a ZFS pool, chrooting into that pool, and recreating a grub menu and bootloader. The system uses a dedicated ext3 /boot partition. I would like to do the same with Ubuntu 14.04, but have encountered several obstacles. There is no Trusty zfs-grub package, and the default grub package doesn't have ZFS support built in. I found a small bug in the build system responsible for that (report with patch created) and built my own grub packages, but the built-in ZFS support is dysfunctional: it does not add the proper arguments to the kernel command line. I thus installed the ZoL grub package I also use on my LMDE system, which does give me a correct grub.cfg. However, even with that correct grub.cfg, the boot process apparently doesn't retrieve the bootfs parameter from the ZFS pool; instead the variable that's supposed to receive the value remains empty. As a result, initrd tries to load the default pool ("rpool"), which fails of course. I can however import the pool by hand, and complete the process by hand. If memory serves me well, I also had to disable AppArmor to keep the boot process from blocking after importing the pool. Am I overlooking something? Just for comparison, I installed the Ubuntu 3.13 kernel on my LMDE system, and that works just fine (i.e. the identical kernel and grub binaries allow successful booting without glitches on LMDE but not on Ubuntu).
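
    For readers hitting the same wall, the manual recovery described above looks roughly like this from the initramfs shell (a sketch: the pool name rpool comes from the question's default, and the dataset name rpool/ROOT/ubuntu is an assumption):

        # from the busybox/initramfs shell, when the automatic import fails:
        zpool import -f -R /root rpool     # import the pool under the root target
        zfs mount rpool/ROOT/ubuntu        # mount the root dataset (name assumed)
        exit                               # let init continue with /root populated

        # so the next boot can find the root dataset without hand-holding:
        zpool set bootfs=rpool/ROOT/ubuntu rpool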

    Read the article

  • Stylecop 4.7.39.0 has been released

    - by TATWORTH
    StyleCop 4.7.39.0 has been released at http://stylecop.codeplex.com/releases/view/79972. The release notes follow:
    - Allow case sensitivity in the deprecated words and recognised words list.
    - Styling fixes.
    - Fix for documentation spelling checks inside nested xml nodes.
    - Look for CustomDictionary.xml files in the folder of the cs file.
    - Update the TabIndex in the spelling tab.
    - Updating default deprecated words and their alternatives.
    - Add support for specifying dictionary folders in the Settings.StyleCop file.
    - Rename StyleCopViolationError to StyleCopHighlightingError and all associated types.
    - Fix the Bulb Item for spelling mistakes to replace matching words correctly.
    - Fix the spelling parser for strings beginning with $$.
    - THREADING FIX: Make StyleCop execute analysis in-process and not create 2 threads. Use CountdownEvent when we move to .NET 4.
    - Use the naming service for the Culture specified for the project.
    - Pass the actual violation through to ReSharper.
    - Ensure Registry access code works for VS2008 addins.
    - Rollback Registry changes to ensure VS2008 plugin loads correctly.
    - Adding support for preferred alternative words for spelling.
    - Adding deprecated word support into Settings.StyleCop file.
    - Spelling is only checked if Office 2010 is installed.
    - Allow editing of deprecated words and their alternatives in the Settings editor.
    - Adding new resource strings.
    - Adding BulbItem and Quick fixes for spelling errors.
    - Moving StringExtensions to common area.
    - Styling fixes.
    - Report all spelling errors found on a line.
    - Start of 4.7.39.0 dev.

    Read the article

  • Simple C: atof giving wrong value [migrated]

    - by Doc
    I have a program that reads input from a single line (a string, obviously) and organizes it into arrays. The problem I have is that at one point the program reads two different values and returns the first one twice. Initially I thought the program was reading the same value twice, but when I tested it, it turned out that it gets the correct one but is outputting the wrong one. For example, input: 2 0.90 0.75 0.7 0.65. Sorry to snip the surrounding loop (while (fgets(string[test], sizeof(string[test]), ifp))):

        pch = strtok_r(NULL, " ", &prog);
        tem3 = atoi(pch);
        while (loop < tem3) {
            pch = strtok_r(NULL, " ", &prog);
            venseatfloat[test][loop][DISCOUNT][OCCUPIED] = (float)atof(pch);
            printf("%f is discount\t", venseatfloat[test][loop][DISCOUNT][OCCUPIED]);
            pch = strtok_r(NULL, " ", &prog);
            strcpy(temp, pch);
            venseatfloat[test][loop][REGULAR][OCCUPIED] = (float)atof(pch);
            printf("%s is the string but %.3f is regular\n", temp, venseatfloat[test][loop][DISCOUNT][OCCUPIED]);
            loop++;
        }

    output:

        0.900000 is discount    0.75 is the string but 0.900 is regular
        0.700000 is discount    0.65 is the string but 0.700 is regular

    What is going on?
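
    Worth noting for anyone reading along: the output is consistent with atof working correctly. The second printf indexes the DISCOUNT slot again instead of the REGULAR slot that was just written, so it re-prints the discount value. A one-line fix in C:

        /* The stored value is fine; the bug is the trace line, which re-reads
           [DISCOUNT] instead of the [REGULAR] slot that was just assigned. */
        printf("%s is the string but %.3f is regular\n",
               temp, venseatfloat[test][loop][REGULAR][OCCUPIED]);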

    Read the article

  • CUDA 4.1 Update

    - by N0xus
    I'm currently working on porting a particle system to update on the GPU via the use of CUDA. With CUDA, I've already passed over the required data I need to the GPU, and allocated and copied the data via the host. When I build the project, it all runs fine, but when I run it, the project says I need to allocate my h_position pointer. This pointer is my host pointer and is meant to hold the data. I know I need to pass the current particle positions to the required cudaMemcpy call; they are currently stored in a list, with a for loop iterating over each particle and calling the following line of code:

        m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f);

    My current host-side CUDA code looks like this:

        float* h_position; // Your host pointer. This holds the data (I assume it's already filled with the data.)
        float* d_position; // Your device pointer, we will allocate and fill this
        float* d_velocity;
        float* d_time;

        int threads_per_block = 128; // You should play with this value
        int blocks = m_maxParticles/threads_per_block + ( (m_maxParticles%threads_per_block)?1:0 );

        const int N = 10;
        size_t size = N * sizeof(float);

        cudaMalloc( (void**)&d_position, m_maxParticles * sizeof(float) );
        cudaMemcpy( d_position, h_position, m_maxParticles * sizeof(float), cudaMemcpyHostToDevice);

    Both of which can be found inside my UpdateParticle() method. I had originally thought it would be a simple case of changing the h_position variable in the cudaMemcpy to m_particleList[i], but then I get the following error:

        no suitable conversion function from "ParticleSystemClass::ParticleType" to "const void *" exists

    I've probably messed up somewhere, but could someone please help fix the issues I'm facing? Everything else seems to be running fine; it's just when I try to run the program that certain things hit the fan.
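
    The conversion error happens because m_particleList[i] is a struct, not a pointer to a contiguous float array, so cudaMemcpy cannot take it directly. A hedged host-side sketch (m_particleList, positionY and m_maxParticles come from the question; copying one float per particle is an assumption about the intended layout):

        // Host-side staging buffer, filled from the array-of-structs particle list.
        float* h_position = new float[m_maxParticles];
        for (int i = 0; i < m_maxParticles; ++i)
            h_position[i] = m_particleList[i].positionY;

        float* d_position = NULL;
        cudaMalloc((void**)&d_position, m_maxParticles * sizeof(float));
        cudaMemcpy(d_position, h_position,
                   m_maxParticles * sizeof(float), cudaMemcpyHostToDevice);

        // ... launch the update kernel, then copy the results back:
        cudaMemcpy(h_position, d_position,
                   m_maxParticles * sizeof(float), cudaMemcpyDeviceToHost);
        for (int i = 0; i < m_maxParticles; ++i)
            m_particleList[i].positionY = h_position[i];

        delete[] h_position;
        cudaFree(d_position);

    The alternative is a struct-of-arrays layout on both sides, which avoids the staging copy entirely; that is the usual design choice for GPU particle systems.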

    Read the article

  • Unable to start Ubuntu 12.04. The system is running in low-graphics mode

    - by kaleidoscpicsoul
    I am a newbie to Ubuntu. I installed Ubuntu 12.04 using a USB stick and it was running fine for a few weeks, then this error popped up. According to one of my friends, the best way was to re-install Ubuntu. Being from a non-Unix background I thought the same too, and after the second install it happened again, only this time much quicker, in 3 days. I don't want to re-install Ubuntu every time this happens. I am a complete newbie to Linux, which means that I am really bad at using the terminal. I know there are other people who fixed this issue using this very same forum, but unfortunately the answers provided are too complex for me to understand. Please let me know how to do this. Things I want to let you know: I would need help step by step, if that is alright with you. After I get the error I get the options and I click "exit to console login". I get the following message on a black screen (which I think is a command-line sort of thing):

        * Stopping save kernel messages [OK]
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName [OK]
        * Starting web server apache2

    and a blinking cursor. So basically it looks like a dead end to my non-Unixy eye. And one final thing: before this issue had happened I had tried configuring Python with Apache2. For that I had uninstalled and installed the LAMP server several times and edited the configuration files too. I don't know if this really is a concern, but I don't know. I have a USB stick with Ubuntu 12.04 on it so I can install it anytime (but I want to know what the issue is rather than running away). I migrated to Ubuntu from Windows and I have no plans to go back. I think that's it from my side. Please let me know if there are any questions.
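
    Two generic first steps that map onto the symptoms above, hedged since the root cause isn't visible from the post (package names assume a stock 12.04 desktop): the apache2 warning is harmless and is silenced by giving Apache a ServerName, and the low-graphics screen can often be cured by reinstalling the desktop/X stack from the console login instead of reinstalling the whole OS:

        # silence the apache2 ServerName warning:
        sudo sh -c 'echo "ServerName localhost" >> /etc/apache2/apache2.conf'
        sudo service apache2 restart

        # rebuild the graphics stack instead of reinstalling Ubuntu:
        sudo apt-get update
        sudo apt-get install --reinstall ubuntu-desktop xserver-xorg
        sudo dpkg-reconfigure lightdm
        sudo reboot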

    Read the article

  • Configuring an Engenius 3500

    - by dsiddens
    The title speaks to only half of the issue: the other half is the settings in Ubuntu and the sequences therein. The computer in this issue does receive internet through the external antenna jack at the back, fed by a simple magnetic-base antenna designed for the roof of an automobile. However, that signal is weak, and the Engenius with an external antenna (Rootenna, ~15 dB gain) and an ethernet wire will supply a stronger, faster signal. I've set the Engenius to the desired source and entered the correct WEP password. The lights on the Engenius indicate that it's connected to the access point. On the Ubuntu side of this I've worked to no avail, changing settings with "Edit Connections" to the point that I'm Ask(ing)Ubuntu for help. I have RTFM'd the Engenius 3500 manual. There is an embarrassing side note to this issue: at one time I had the Engenius working! It seems that I can't recall the settings and sequences I used way back when. And I may as well confess to not knowing the command line. I'm a GUI guy. Thank you for your time, Doug

    Read the article

  • Hyperion Training from Oracle University

    - by Mike.Hallett(at)Oracle-BI&EPM
    There is a great portfolio of the latest version of Hyperion training from Oracle University, available at a discount for Oracle Partners. For example, see these sets of courses: Disclosure Management; Financial Close Management (2); Financial Data Quality Management (3); Hyperion Financial Management (14); Integrated Operational Planning; Planning (13); Profitability Management (2); Public Sector Planning and Budgeting (3); Smart View (9); Strategic Finance; Data Relationship Management (3); Crystal Ball (4)

    Read the article

  • Partition tool with console UI (as in server installation)?

    - by lepe
    Back in 2006, Ray (3DLover) posted the same question in: http://ubuntuforums.org/showthread.php?t=309680 but none of the answers were really useful. Now, with a little help from the AskUbuntu community, I would like to repeat his question to see if this time it can be answered correctly. So this is the question (and what I wish for too): I'm looking for a UI tool for managing partitions in a console. I have installed Ubuntu Server, so I don't have X Windows at all. fdisk and sfdisk are entirely command line. parted is slightly better, but it's not really a UI. cfdisk has somewhat of a UI, but it only works on one disk at a time, and there are no advanced options like configuring LVM or RAID. Just partitioning. I love the partition tool that is available during the OS install procedure. You can partition, configure RAID and LVM sets; it can format the partitions with several different file systems; it can set labels and mount options, and it can insert your volumes into your fstab. Is this tool available as a stand-alone program? I can't find it anywhere. I think it's called parted_server, but I can't find much information about where to get it. In the past, I have run the Ubuntu install procedure just to use the partition manager that comes with it (canceling the install after making my partition edits). Can anyone help me with this? Thanks -Ray Thanks in advance.

    Read the article

  • Using foldr to append two lists together (Haskell)

    - by Luke Murphy
    I have been given the following question as part of a college assignment. Due to the module being very short, we are using only a subset of Haskell, without any of the syntactic sugar or idiomatic shortcuts... I must write:

        append xs ys : The list formed by joining the lists xs and ys, in that order
        append (5:8:3:[]) (4:7:[])  =>  5:8:3:4:7:[]

    I understand the concept of how foldr works, but I am only starting off in functional programming. I managed to write the following working solution (hidden for the benefit of others in my class...). However, I just can't for the life of me explain what the hell is going on!? I wrote it by just fiddling around in the interpreter; for example, the following line:

        foldr (\x -> \y -> x:y) [] (2:3:4:[])

    returned [2,3,4], which led me to try

        foldr (\x -> \y -> x:y) (2:3:4:[]) (5:6:7:[])

    which returned [5,6,7,2,3,4], so I worked it out from there. I came to the correct solution through guesswork and a bit of luck... I am working from the following definition of foldr:

        foldr = \f -> \s -> \xs ->
            if null xs
            then s
            else f (head xs) (foldr f s (tail xs))

    Can someone baby-step me through my correct solution? I can't seem to get it... I have already scoured the web, and also read a bunch of SE threads, such as How foldr works.
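
    For anyone else stuck on the same assignment, here is a baby-step expansion in Haskell. It assumes the hidden solution is the usual foldr formulation (if the class version uses explicit lambdas, the reduction is identical, since (\x -> \y -> x:y) is just (:)):

        -- append via foldr: rebuild xs with (:) but end with ys instead of []
        append :: [a] -> [a] -> [a]
        append xs ys = foldr (\x -> \y -> x : y) ys xs

        -- Step by step, using the definition of foldr from the question,
        -- with f = \x -> \y -> x : y and s = (4:7:[]):
        --
        -- append (5:8:3:[]) (4:7:[])
        --   = foldr f (4:7:[]) (5:8:3:[])
        --   = 5 : foldr f (4:7:[]) (8:3:[])   -- xs not null: f (head xs) (foldr f s (tail xs))
        --   = 5 : 8 : foldr f (4:7:[]) (3:[])
        --   = 5 : 8 : 3 : foldr f (4:7:[]) []
        --   = 5 : 8 : 3 : (4:7:[])            -- xs null: return s
        --   = 5:8:3:4:7:[]

    The intuition: foldr replaces every (:) in xs with f and the final [] with s, so choosing f = (:) and s = ys leaves xs intact and splices ys onto the end.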

    Read the article

  • How do I automatically start Clamz with AMZ files for Amazon MP3 downloads?

    - by Takkat
    Chromium can open downloaded files with the default application (e.g. a PDF in Evince). In my setup, a downloaded .amz (for Amazon MP3) always opened with Gedit. However, I would like all downloaded .amz files to automatically open with Clamz, a command-line tool for downloading that works like a charm. As my .amz files were associated to open with Gedit in Nautilus too, I thought it was a good idea to add a clamz.desktop file in ~/.local/share/applications (according to this answer):

        [Desktop Entry]
        Encoding=UTF-8
        Name=Clamz
        Comment=Open AMZ files for Amazon MP3 download
        Exec=/usr/bin/clamz %u
        Terminal=True
        Type=Application
        Icon=
        Categories=Application;
        StartupNotify=true
        MimeType=audio/x-amzxml;
        NoDisplay=true

    This lets me choose Clamz as the default application in Nautilus. But when opening an .amz file in Nautilus, it still does not open with Clamz as expected but is treated as an executable text file instead (note that the executable bit is not set!). Is there any other way to make Chromium or Nautilus always open an .amz file with Clamz? Did I miss changing a setting in another place?
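
    One detail worth checking, hedged as an assumption about the cause: the Desktop Entry specification defines boolean values as lowercase true/false, so Terminal=True may render the entry invalid and trigger the text-file fallback. The corrected key, plus the standard freedesktop commands to refresh the association:

        Terminal=true

        update-desktop-database ~/.local/share/applications
        xdg-mime default clamz.desktop audio/x-amzxml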

    Read the article

  • Javascript Module pattern with DOM ready

    - by dego89
    I am writing a JS module pattern to test out code and help me understand the pattern, using a JSFiddle. What I can't figure out is why my "private methods" on lines 25 and 26, when referenced via DOM ready, have a value of undefined. JSFiddle code sample:

        var obj = { key: "value" };

        var Module = (function () {
            var innerVar = "5";
            console.log("obj var in Module:");
            console.log(obj);

            function privateFunction() {
                console.log("privateFunction() called.");
                innerFunction();
                function innerFunction() {
                    console.log("inner function of (private function) called.");
                }
            }

            function _numTwo() {
                console.log("_numTwo() function called.");
            }

            return {
                test: privateFunction,
                numTwo: _numTwo
            };
        }(obj));

        $(document).ready(function () {
            console.log("$ Dom Ready");
            console.log("Module in Dom Ready: ");
            console.log(Module.test());
        });
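
    The functions themselves are exposed correctly; what is undefined is their return value. privateFunction has no return statement, so console.log(Module.test()) first runs the function (printing its own logs) and then logs undefined, the result of the call. A minimal sketch of the distinction:

        function privateFunction() {
            console.log("privateFunction() called.");
            return "some value";    // without this line, the call evaluates to undefined
        }

        console.log(Module.test);   // logs the function object itself
        console.log(Module.test()); // logs whatever the function returns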

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part III)

    The last post ended with us just getting started on stumbling into text template file customization, a task that required a Visual Studio extension (Tangible T4 Editor) to even have a chance at completing. Despite the benefits of the Tangible T4 Editor, I still had a hard time putting together a solid text template that would be easy to explain. This is mostly due to the way the files allow you to mix code (encapsulated in <# #>) with straight-up text to generate. It is effective, to be sure, but not very readable. Nevertheless, I will try and explain what was accomplished in my custom tt file, though the details of it are not really the point of this article (my way of saying don't criticize my crappy code, and certainly don't use it in any somewhat real application. You may become dumber just by looking at this code. You have been warned; really, that is the footnote I should put at the end of all of my blog posts). To begin with, there were two basic requirements that I needed the code generator to satisfy: reading one to many entity framework files, and using the entities that were found to write one to many class files. Thankfully, using the Entity Object Generator as a starting point gave us an example of how to do exactly that by using the MetadataLoader and EntityFrameworkTemplateFileManager; you include references to these items and use them like so:

        // Instantiate an entity framework file reader and file writer
        MetadataLoader loader = new MetadataLoader(this);
        EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this);

        // Load the entity model metadata workspace
        MetadataWorkspace metadataWorkspace = null;
        bool allMetadataLoaded = loader.TryLoadAllMetadata("MFL.tt", out metadataWorkspace);
        EdmItemCollection ItemCollection = (EdmItemCollection)metadataWorkspace.GetItemCollection(DataSpace.CSpace);

        // Create an IO class to contain the 'get' methods for all entities in the model
        fileManager.StartNewFile("MFL.IO.gen.cs");

    Next, we want to be able to loop through all of the entities found in the model, and then each property for each entity, so we can generate classes and methods for each. The code for that is blissfully simple:

        // Iterate through each entity in the model
        foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
        {
            // Iterate through each primitive property of the entity
            foreach (EdmProperty edmProperty in entity.Properties.Where(p => p.TypeUsage.EdmType is PrimitiveType && p.DeclaringType == entity))
            {
                // TODO:  Create properties
            }

            // Iterate through each relationship of the entity
            foreach (NavigationProperty navProperty in entity.NavigationProperties.Where(np => np.DeclaringType == entity))
            {
                // TODO:  Create associations
            }
        }

    There really isn't anything more advanced than that going on in the text template; the only thing I had to blunder through was realizing that if you want the generator to interpret a line of code (such as our iterations above), you need to enclose the code in <# and #>, while if you want the generator to interpret the VALUE of code, such as putting the entity name into the class name, you need to enclose the code in <#= and #>, like so:

        public partial class <#=entity.Name#>

    To make a long story short, I did a lot of repetition of the above to come up with a text template that generates a class for each entity based on its properties, and a set of IO methods for each entity based on its relationships.
    The two work together to provide lazy-loading for hierarchical data (such as getting Team.Players), so it should be pretty intuitive to use on a front-end. This text template is available here; you can tweak the inputFiles array to load one or many different edmx models and generate the basic xml IO and class files, though it will probably only work correctly in the simplest of cases, like our MFL model described in the previous post. Additionally, there is no validation, logging or error handling, which is something I want to handle later by stumbling through the Enterprise Library 5.0. The code that gets generated isn't anything special, though using the LINQ to XML feature was something very new and exciting for me; I had only worked with XML in the past using the DOM or XML Reader objects along with XPath, and the LINQ to XML model is just so much more elegant and supposedly efficient (something to test later). For example, the following code was generated to create a Player object for each Player node in the XML:

        return from element in GetXmlData(_PlayerDataFile).Descendants("Player")
            select new Player
            {
                Id = int.Parse(element.Attribute("Id").Value)
                ,ParentName = element.Parent.Name.LocalName
                ,ParentId = long.Parse(element.Parent.Attribute("Id").Value)
                ,Name = element.Attribute("Name").Value
                ,PositionId = int.Parse(element.Attribute("PositionId").Value)
            };

    It is all done in one statement, no looping needed. Even though GetXmlData loads the entire xml file just like the old XML DOM approach would have, it is supposed to be much less resource intensive. I will definitely put that to the test after we develop a user interface for getting at this data. Speaking of the data: where IS the data? We've put together a pretty model and a bunch of code around it, but we don't have any data to speak of. We can certainly drop to our favorite XML editor and crank out some data, but if it doesn't totally match our model, it will not load correctly. To help with this, I've built in a method to generate xml at any given layer in the hierarchy. So for us to get the closest possible thing to real data, we'd need to invoke MFL.IO.GenerateTeamXML and save the results to file. Doing so should get us something that looks like this:

        <Team Id="0" Name="0">
          <Player Id="0" Name="0" PositionId="0">
            <Statistic Id="0" PassYards="0" RushYards="0" Year="0" />
          </Player>
        </Team>

    Sadly, it is missing the Positions node (haven't thought of a way to generate lookup xml yet) and the data itself isn't quite realistic (well, as realistic as MFL data can be anyway). Let's manually remedy that for now to give us a decent starter set of data.
    Note that this is TWO xml files, Lookups.xml and Teams.xml:

        <Lookups Id="0">
          <Position Id="0" Name="Quarterback"/>
          <Position Id="1" Name="Runningback"/>
        </Lookups>

        <Teams Id="0">
          <Team Id="0" Name="Chicago">
            <Player Id="0" Name="QB Bears" PositionId="0">
              <Statistic Id="0" PassYards="4000" RushYards="120" Year="2008" />
              <Statistic Id="1" PassYards="4200" RushYards="180" Year="2009" />
            </Player>
            <Player Id="1" Name="RB Bears" PositionId="1">
              <Statistic Id="2" PassYards="0" RushYards="800" Year="2007" />
              <Statistic Id="3" PassYards="0" RushYards="1200" Year="2008" />
              <Statistic Id="4" PassYards="3" RushYards="1450" Year="2009" />
            </Player>
          </Team>
        </Teams>

    Ok, so we have some data, we have a way to read/write that data, and we have a friendly way of representing that data. Now, what remains is the part that I have been looking forward to the most: presenting the data to the user and giving them the ability to add/update/delete, and doing so in a way that is very intuitive (easy) from a development standpoint.

    Read the article

  • A Brand-new Automated Testing Tool is the Result of Telerik and ArtOfTest Merger

    I'm sure you've already heard the great news about Telerik's expansion and the new Telerik Automated Testing Tools division. I am excited to share what we worked on and produced over the last couple of months. New release: the latest Telerik release that went live this week added a completely new tool to Telerik's automated testing product line. The new QA Edition is tailored for QA professionals. The QA Edition is a standalone tool that allows QAs to freely create, execute and maintain their tests without having to install Visual Studio. If you are a developer and you want something much faster and more lightweight than VS, then the standalone tool is worth trying. New IDE: the QA Edition is a WPF application with an interface built on top of the latest and greatest RadControls for WPF. This allowed us to configure and build an intuitive and easy-to-use UI. Additionally, the rich ...

    Read the article

  • How do I configure an Intel HD Graphics 4000?

    - by derabbink
    First off, please note that last night I already posted this question to a launchpad mailing list, so this could be considered a cross-post. However, I think this is a better place to ask the same question. The question: how can I configure my Ubuntu 12.04, with upgraded kernel (3.6), to use the Intel HD Graphics 4000 adapter? (Intel HD 4000 is the standard 3rd-gen Intel Core i7 (Ivy Bridge) graphics adapter.) Some output:

        $ glxinfo
        name of display: :0
        X Error of failed request:  BadRequest (invalid request code or no such operation)
          Major opcode of failed request:  154 (GLX)
          Minor opcode of failed request:  19 (X_GLXQueryServerString)
          Serial number of failed request:  12
          Current serial number in output stream:  12

    My xorg.conf (this is probably the farthest from what it should be):

        $ cat /etc/X11/xorg.conf
        Section "Screen"
            Identifier "Default Screen"
            DefaultDepth 24
        EndSection

        Section "Module"
            Load "glx"
        EndSection

    From lspci, I only listed the lines I think are relevant. If you want more info in order to help me, please comment :)

        00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        16:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Whistler XT [AMD Radeon HD 6700M Series]
        16:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Turks HDMI Audio [Radeon HD 6000 Series]
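
    For what it's worth, a minimal xorg.conf Device section that pins X to the Intel DDX driver looks like this (a sketch only: the BusID is taken from the lspci output above, and on a hybrid Intel/AMD laptop the GLX BadRequest error may equally come from a mismatched fglrx/Mesa GLX stack rather than the config):

        Section "Device"
            Identifier "Intel HD 4000"
            Driver     "intel"
            BusID      "PCI:0:2:0"
        EndSection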

    Read the article

  • Sound not working with Ubuntu 12.10 clean install

    - by ZooRocket
    Did a clean install of Ubuntu 12.10 (from 12.04) and the sound is not working now. In 12.04 it worked out of the box. I ran hwinfo --sound:

        hal.1: read hal dataprocess 4222: arguments to dbus_move_error() were incorrect, assertion "(dest) == NULL || !dbus_error_is_set ((dest))" failed in file ../../dbus/dbus-errors.c line 282. This is normally a bug in some application using the D-Bus library.
        libhal.c 3483 : Error unsubscribing to signals, error=The name org.freedesktop.Hal was not provided by any .service files
        10: PCI 1b.0: 0403 Audio device
        [Created at pci.318]
        Unique ID: u1Nb.ekgK5auW5RA
        SysFS ID: /devices/pci0000:00/0000:00:1b.0
        SysFS BusID: 0000:00:1b.0
        Hardware Class: sound
        Model: "Intel 82801G (ICH7 Family) High Definition Audio Controller"
        Vendor: pci 0x8086 "Intel Corporation"
        Device: pci 0x27d8 "82801G (ICH7 Family) High Definition Audio Controller"
        SubVendor: pci 0x1028 "Dell"
        SubDevice: pci 0x01de
        Revision: 0x01
        Memory Range: 0xfdffc000-0xfdffffff (rw,non-prefetchable)
        IRQ: 11 (no events)
        Module Alias: "pci:v00008086d000027D8sv00001028sd000001DEbc04sc03i00"
        Driver Info #0:
          Driver Status: snd_hda_intel is active
          Driver Activation Cmd: "modprobe snd_hda_intel"
        Config Status: cfg=new, avail=yes, need=no, active=unknown

    I'm not sure how to proceed to fix this. It also worked prior to this version.

    Read the article

  • Certain grid lines not rendering as expected

    - by row1
    I am drawing a simple quad (a triangle strip with 4 vertices) as the floor, and then drawing an 8x8 grid over the top (a collection of vertex pairs for a line list). The vertical grid lines work fine (apart from being very aliased), but some of the horizontal lines do not get rendered. The grid renders fine if I do not draw the quad.

        foreach (EffectPass pass in _Effect.CurrentTechnique.Passes)
        {
            pass.Apply();

            CurrentGraphicsDevice.SetVertexBuffer(_VertexFloorBuffer);
            _Engine.CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);

            // Some of the horizontal lines seem to disappear if we draw the above quad.
            CurrentGraphicsDevice.SetVertexBuffer(_VertexGridBuffer);
            CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.LineList, 0, _VertexGridBuffer.VertexCount / 2);
        }

    What could be causing these lines to not be rendered? Update: I added the code below after I draw my quad and grid, and it started working. But I am not sure why that works, as I thought this code was only there to draw the WPF controls:

        elementRenderer.Render();
        spriteBatch.Begin();
        spriteBatch.Draw(elementRenderer.Texture, Vector2.Zero, Color.White);
        spriteBatch.End();
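
    A hedged reading consistent with the update: coplanar lines losing the depth test against the floor quad is classic z-fighting, and SpriteBatch.Begin/End resets several device states, which would explain why the unrelated code "fixed" it. Two common XNA-style mitigations, assuming a standard BasicEffect setup (CurrentGraphicsDevice is the name from the question):

        // Option 1: bias the grid slightly toward the camera via the rasterizer.
        var gridRasterizer = new RasterizerState
        {
            CullMode = CullMode.None,
            DepthBias = -0.00002f   // tiny bias so the lines win the depth test
        };
        CurrentGraphicsDevice.RasterizerState = gridRasterizer;

        // Option 2: draw the quad first, then draw the grid with depth testing
        // off so it always lands on top of the floor, then restore the state.
        CurrentGraphicsDevice.DepthStencilState = DepthStencilState.None;
        // ... DrawPrimitives(PrimitiveType.LineList, ...) ...
        CurrentGraphicsDevice.DepthStencilState = DepthStencilState.Default;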

    Read the article

  • GeoTools Demo Embedded in an Application Framework via Maven

    - by Geertjan
    GeoTools 8.4 was very recently released, according to its active blog, and to celebrate here's a starting point for working with GeoTools on the NetBeans Platform (screenshot omitted). The sources are available as a Maven project, so this project can be used in any IDE or on the command line: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/tutorials/geospatial/geotools/MyGeospatialSystem Though quite dated, the GeoTools NetBeans Quick Start is very helpful, especially since it used Maven too, but not the NetBeans Platform, unlike the above sample. From the point of view of NetBeans Platform developers, the GeoTools JMapPane class is very useful, providing the integration point between GeoTools and the rest of the NetBeans Platform application. Being integrated into the NetBeans Platform means that a host of standard features are now available to the GeoTools features, e.g., print functionality, which only requires a runtime dependency on the NetBeans Print API, together with the "print.printable" client property put into the constructor of the TopComponent (screenshot omitted). By the way, I've spent some time now and again being confused about the difference between GeoTools and GeoToolkit. Here's an interesting starting point to beginning to understand the differences and history between them. Soon I'd like to have a similar example to the above for GeoToolkit.
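
    The client-property trick mentioned above is small enough to show inline. A sketch of a TopComponent hosting the map pane (JMapPane and MapContent are real GeoTools 8 classes; the window class itself is invented for the example):

        import java.awt.BorderLayout;
        import org.geotools.map.MapContent;
        import org.geotools.swing.JMapPane;
        import org.openide.windows.TopComponent;

        // Hypothetical window hosting the GeoTools map inside a NetBeans Platform app.
        public class MapTopComponent extends TopComponent {

            public MapTopComponent(MapContent map) {
                setName("Map");
                setLayout(new BorderLayout());
                add(new JMapPane(map), BorderLayout.CENTER);

                // Opt this window into the NetBeans Print API (File > Print),
                // as described in the post.
                putClientProperty("print.printable", Boolean.TRUE);
            }
        }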

    Read the article

  • Help with design structure choice: Using classes or library of functions

    - by roverred
    So I have a GUI class that calls another class called ImageProcessor, which contains a bunch of functions that perform image processing algorithms like edge detection, Gaussian blur, contour finding, contour map generation, etc. The GUI passes an image to ImageProcessor, which performs one of those algorithms on it and returns the image back to the GUI to display. So essentially ImageProcessor is a library of independent image processing functions right now. It is called in the GUI like so:

        Image image = ImageProcessor.EdgeDetection(oldImage);

    Some of the algorithm procedures require many functions, and some can be done in a single function or even one line. All these functions for the algorithms jam-packed into ImageProcessor can be pretty messy, and ImageProcessor doesn't sound like it should be a library. So I was thinking about making every algorithm a class with a shared interface, say IAlgorithm. Then I pass the IAlgorithm interface from the GUI to the ImageProcessor:

        public interface IAlgorithm
        {
            Image Process();
        }

        public class ImageProcessor
        {
            public Image Process(IAlgorithm theAlgorithm)
            {
                return theAlgorithm.Process();
            }
        }

    Calling it in the GUI like so:

        Image image = ImageProcessor.Process(new EdgeDetection(oldImage));

    I think it makes sense from an object point of view, but the problem is I'll end up with some classes that are just one function. What do you think is a better design, or are they both crap and you have a much better idea? Thanks!
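
    A hedged sketch of the strategy-style variant being described, in C#. The names follow the question; passing the image into Process rather than the constructor is a deliberate tweak, and the small adapter at the end addresses the "classes that are just one function" worry by letting one-liners stay lambdas:

        using System;
        using System.Drawing;

        public interface IAlgorithm
        {
            Image Process(Image input);
        }

        public class EdgeDetection : IAlgorithm
        {
            public Image Process(Image input)
            {
                // ... edge detection goes here ...
                return input;
            }
        }

        // Adapter so trivial algorithms don't each need a class of their own.
        public class LambdaAlgorithm : IAlgorithm
        {
            private readonly Func<Image, Image> _f;
            public LambdaAlgorithm(Func<Image, Image> f) { _f = f; }
            public Image Process(Image input) { return _f(input); }
        }

        // Usage from the GUI:
        //   Image edges  = new EdgeDetection().Process(oldImage);
        //   Image copy   = new LambdaAlgorithm(img => (Image)img.Clone()).Process(oldImage);

    The trade-off is the usual one: the interface buys uniform dispatch (undo stacks, pipelines, menus of algorithms) at the cost of ceremony; if no caller ever treats algorithms polymorphically, the plain static library is the simpler design.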

    Read the article

  • Is text-only mode a saving or a problem for battery savings?

    - by Robottinosino
    A friend is flying to the US from Europe and asked me a very thought-provoking question, which I am not remotely able to answer with substance, so I am asking it here: how to absolutely maximise battery life on an Ubuntu (laptop) install? Do not rush to mark this as a duplicate; there is an important point here: does GNOME help or worsen battery life? Let me provide some context. The only task he needs to perform is editing text files in Vim. He is unsure whether running GNOME will drain his battery life more, or actually save him some battery life given the smarts of GNOME's power management features like "switch this peripheral to power save after X minutes..." (GNOME might just be a configuration front-end for settings that are governed by command-line utils, for all I know?) He could perfectly well boot the system in text-only mode and use the automatic 6 virtual consoles for his needs, if that's a saving at all over running tmux (I think so, because of all the smart buffering/history/etc. the latter does by default?) Exactly how would you advise him to run his laptop during his flight? What I told him already: power off WiFi in the BIOS, not from the "GUI"; power off Bluetooth; switch off the courtesy light and use low monitor brightness; play music off of his phone, not mp3blaster; do not use his tiny portable mouse (and do not attach any other USB gimmicks like a "screen light", etc.); stop development services he will not be using, especially apache2, tomcat, dovecot, postgresql, etc. Potentially: switch off his cron jobs? (He does an rsync + tar + 7za of his "work in progress" every so often.) I think the above is standard stuff one could get off StackExchange, and with many duplicates... The core of this question is, I think: will running Ubuntu in text-only mode be a saving in terms of battery life or a problem, and why? (Provide some technical arguments.) I think it will be a saving, but I am also scared about "other things" detecting and enabling advanced chipset power management features only when some services are started... and I fear these "services" may be off in text-only mode.

    Read the article

  • display driver problem

    - by Robert
    I just recently installed Ubuntu 12.10 on my laptop and am keeping Windows 7 as a backup for when I get stuck, like now... After I installed Ubuntu, I got online and was reading about some of the things to do after installing for the first time. One of them was looking for additional drivers, which I did, and I found two display drivers. Both drivers were from Ubuntu. However, after the install and a reboot, I can't see any of my menu bars: no app bar on the side, no bar at the top of the display, nothing. I did have a file on the desktop that I was able to open. And I just learned that Ctrl+Alt+T gets me a form of command prompt. This worked, but I don't know how to use it. How do I get my menu bars back? Is there a way to uninstall the drivers? I am really new to Ubuntu and like it so far, but am having problems getting over the learning curve.

    Read the article

  • Default mount options on auto-mounted NTFS partitions (how to add `noexec` and `fmask=0111`?)

    - by jetxee
    I use auto-mounting of external USB devices, and it works as expected, except that NTFS partitions are mounted with the executability flag on. For example:

        /dev/sdb1 on /media/Elements type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096,default_permissions)

    All normal files are -rwxrwxrwx on this partition. I am not happy with the x's. I know I can have it mounted the way I want if I pass the fmask=0111 option. Now I use Lucid, and suppose it uses some new auto-mounting mechanism (gvfs-mount?), but I don't really know how the default mounting options can be changed now. Gconf settings in /system/storage/default_options/ntfs/mount_options have no effect. So, how do I make fmask=0111 the default automounting option for all NTFS partitions? (I'd also be grateful if someone explains how the current automounting mechanism works, how to configure it, and, if the default mounting options are hard-coded, what I have to recompile to change them.) I know that I can put a line in /etc/fstab and/or mount manually, but this is not the solution I want, because 1) I don't want to edit /etc/fstab for each and every external drive I use, and 2) fstab records appear in the Places pane of Nautilus even if the drives are not present. The question is how to change the defaults.

    Read the article

  • Introducing RedPatch

    - by timhill
    The Ksplice team is happy to announce the public availability of one of our git repositories, RedPatch. RedPatch contains the source for all of the changes Red Hat makes to their kernel, one commit per fix and we've published it on oss.oracle.com/git. With RedPatch, you can access the broken-out patches using git, browse them online via gitweb, and freely redistribute the source under the terms of the GPL. This is the same policy we provide for Oracle Linux and the Unbreakable Enterprise Kernel (UEK). Users can freely access the source, view the commit logs and easily identify the changes that are relevant to their environments. To understand why we've created this project we'll need a little history. In early 2011, Red Hat changed how they released their kernel source, going from a tarball that had individual patch files to shipping the kernel source as one giant tarball with a single patch for all Red Hat-introduced changes. For most people who work in the kernel this is merely an inconvenience; driver developers and other out-of-kernel module developers can see the end result to make sure their module still performs as expected. For Ksplice, we build individual updates for each change and rely on source patches that are broken-out, not a giant tarball. Otherwise, we wouldn’t be able to take the right patches to create individual updates for each fix, and to skip over the noise — like a change that speeds up bootup — which is unnecessary for an already-running system. We’ve been taking the monolithic Red Hat patch tarball and breaking it into smaller commits internally ever since they introduced this change. At Oracle, we feel everyone in the Linux community can benefit from the work we already do to get our jobs done, so now we’re sharing these broken-out patches publicly. In addition to RedPatch, the complete source code for Oracle Linux and the Oracle Unbreakable Enterprise Kernel (UEK) is available from both ULN and our public yum server, including all security errata. Check out RedPatch and subscribe to [email protected] for discussion about the project. Also, drop us a line and let us know how you're using RedPatch!

    Read the article

  • 'ACT On' OVCA for Cloud Providers Program Launch Webcast: June 12, 2014 - 9am UKT / 10am CET / 11am EET

    - by Cinzia Mascanzoni
    Normal 0 false false false EN-US X-NONE X-NONE We invite you to join the OVCA for Cloud Providers ‘ACT On' program launch at 11am BST / 12noon CET on June 12. · More and more customers realize the value of shifting to a Converged IT Infrastructure, this is why IDC expects this market to grow 40% annually for the next 2 years. · The Oracle Virtual Compute Appliance (OVCA) with attached ZFS storage is the perfect answer to this market trend. By providing rapid application and cloud deployment, OVCA allows customers to cut capital expenditures by up to 50% and deploy key applications up to 7x faster. · For Partners, OVCA supports their journey to consolidation, virtualization and cloud, and allows them to sell higher value services to their customers. The objective of this webcast is to share with you the OVCA value proposition, help you identify the best target partners, and provide you with the Enablement and Demand Generation content and resources. To register and for further details click here /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin-top:0cm; mso-para-margin-right:0cm; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0cm; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;}

    Read the article
