Search Results

Search found 950 results on 38 pages for 'floating'.

Page 34 of 38

  • How can I write a function template for all types with a particular type trait?

    - by TC
    Consider the following example: struct Scanner { template <typename T> T get(); }; template <> string Scanner::get() { return string("string"); } template <> int Scanner::get() { return 10; } int main() { Scanner scanner; string s = scanner.get<string>(); int i = scanner.get<int>(); } The Scanner class is used to extract tokens from some source. The above code works fine, but fails when I try to get other integral types like a char or an unsigned int. The code to read these types is exactly the same as the code to read an int. I could just duplicate the code for all other integral types I'd like to read, but I'd rather define one function template for all integral types. I've tried the following: struct Scanner { template <typename T> typename enable_if<boost::is_integral<T>, T>::type get(); }; Which works like a charm, but I am unsure how to get Scanner::get<string>() to function again. So, how can I write code so that I can do scanner.get<string>() and scanner.get<any integral type>() and have a single definition to read all integral types? Update: bonus question: What if I want to accept more than one range of classes based on some traits? For example: how should I approach this problem if I want to have three get functions that accept (i) integral types (ii) floating point types (iii) strings, respectively.
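    One possible shape for this, sketched with C++11's std::enable_if/std::is_integral in place of the Boost traits named above (the Tag helper and the placeholder return values are illustrative, not part of the original code): a single public get<T>() forwards to private overloads, SFINAE keeps the integral template out of play for non-integral types, and plain overloads cover cases like std::string.

        #include <iostream>
        #include <string>
        #include <type_traits>

        struct Scanner {
            template <typename T>
            T get() { return get_impl(Tag<T>()); }          // single entry point

        private:
            template <typename T> struct Tag {};

            // Chosen for every integral T; SFINAE removes it otherwise.
            template <typename T>
            typename std::enable_if<std::is_integral<T>::value, T>::type
            get_impl(Tag<T>) { return static_cast<T>(10); } // stand-in for real token parsing

            // Plain overloads cover the non-integral cases, e.g. strings.
            std::string get_impl(Tag<std::string>) { return "string"; }
        };

        int main() {
            Scanner scanner;
            std::cout << scanner.get<int>() << ' '
                      << scanner.get<unsigned int>() << ' '
                      << scanner.get<std::string>() << '\n';
        }

    For the bonus question, the same pattern extends naturally: add another enable_if'd get_impl template constrained on std::is_floating_point, and keep plain overloads for concrete types such as std::string.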

    Read the article

  • SVG text - total length changes depending on zoom

    - by skco
    In SVG (for web browsers), if I add a <text> element and put some text in it, the total rendered width of the text string changes depending on the scale of the text. Let's say I add "mmmmmmmmmmmmmmmmmmmmmmmmmmA" as text, and then I want to draw a vertical line (or another exactly positioned element) intersecting the very last character. This works fine, but if I zoom out the text becomes shorter or longer and the line no longer intersects the text in the right place. The error can be as much as +/- 5 character widths, which is unacceptable. The error is also unpredictable: 150% and 160% zoom can add 3 characters of length while 155% is 2 character lengths shorter. My zoom is implemented as a scale transform on the root element of my canvas, which is a <g>. I have tried multiplying the font-size by 1000 and scaling down equally on the zoom transform (and vice versa) in case it was a floating point error, but the result is the same. I found the textLength attribute[1], which is supposed to adjust the total length so the text always ends where I choose, but it only works in WebKit. Firefox and Opera seem not to care at all about this value (I haven't tried IE9 yet). Is there any way to render text exactly positioned without resorting to homemade filling of font outlines? [1] http://www.w3.org/TR/SVG11/text.html#TextElementTextLengthAttribute Update: a snippet of the structure I'm using: <svg> <g transform="scale(1)"> <!--This is the root, i'm changing the scale of this element to zoom --> <g transform="scale(0.014)"> <!--This is a wrapper for multi-line text, scaling, other grouping etc --> <text font-size="1000" textLength="40000">ABDCDEFGHIJKLMNOPQRSTUVXYZÅÄÖabcdefghijklmnopqrstxyzåäö1234567890</text> </g> </g>

    Read the article

  • Zend Framework: How do I modify/format the form view generated with Zend_Dojo_Form elements

    - by pinardelrio
    I have created a form: <?php class Application_Form_Issue extends Zend_Dojo_Form { public function init() { $this->setName('issue'); $this->setMethod('post'); $id = new Zend_Form_Element_Hidden('id'); $id->addFilter('Int'); $date_recvd = new Zend_Dojo_Form_Element_DateTextBox('date_recvd'); $date_recvd->setLabel('Date Received') //->setRequired(true) /*->addValidator('NotEmpty'); */; More Form elements ... To view this form my view script is: <?php echo $this->form; ?> This all works just fine, with fully functional dojo form elements (datepicker, timepicker, etc) and successfully saving the data. However, now, I want to format the form that is generated with css. Such as grouping some elements and floating left or right, making some input text fields wider/narrower, etc. How? I realize I can modify the view script but it seems like that defeats the purpose of using Zend_Dojo_Form or Zend_Form. Is that a correct assumption?

    Read the article

  • can I store an id value in array of float type ?

    - by srikanth rongali
    I used, for(id value in values) to get the value from an NSArray. Now I want to store it in 2 dimensional float array[][]. When I try to assign the values to array it is giving error:incompatible types in assignment. I tried to cast the value but I got error: pointer value used where a floating point value was expected. I need to store the values in an 2 dimensional array . How can I make it ? Thank You. @implementation fromFileRead1 NSString *fileNameString; int numberOfEnemies, numberOfValues; -(id)init { if( (self = [super init]) ) { NSString *path = @"/Users/sridhar/Desktop/Projects/exampleOnFile2/enemyDetals.txt"; NSString *contentsOfFile = [[NSString alloc] initWithContentsOfFile:path]; NSArray *lines = [contentsOfFile componentsSeparatedByString:@"\n"]; numberOfEnemies = [lines count]; NSLog(@"The number of Lines: %d", numberOfEnemies); for (id line in lines) { NSLog(@"Line %@", line ); NSString *string1 = line; NSArray *split1 = [string1 componentsSeparatedByString:@","]; numberOfValues = [split1 count]; NSLog(@"The number of values in Row: %d", numberOfValues); for (id value in split1) { NSLog(@"value %@", value); float value1; value1 = [split1 objectAtIndex:2]); NSLog(@"VAlue of Value1 at index 2: %f", value1 ); } } } return self; } @end In enemyDetal.txt I have 1,3,3 2,3,2.8 10,2,1.6

    Read the article

  • Web UI element to represent two different micro-views of data in the same spot?

    - by Chris McCall
    I've been tasked with laying out a portion of a screen for a customer care (call center) app that serves as sort of a header/summary block at the top of the screen. Here's what it looks like: The important part is in the red box. That little tooltip is the biz's vision for how to represent both the numeric SiteId and the textual Site Name all in the same piece of screen real estate. I asked, and the business thinks the Name is more important than the ID, but lists the Id by default, because the Name can't be truncated in the display, and there's only so much horizontal room to put the data. So they go with the Id, because it's fewer characters, and then they have the user mouse-over the Id to display the name (presumably because the tooltip can be of unlimited width and since it's floating over the rest of the screen, the full name will always be displayed. So, here's my question: Is there some better UI metaphor that I don't know about that could get this job done, while meeting the following constraints?: Does not require the mouse (uses a keyboard shortcut to do the "reveal") Allows the user to copy and paste the name Will not truncate the name Provides for the display of both the ID and name in the same spot Works with IE7

    Read the article

  • style ul with width to accommodate items

    - by Tallmaris
    Sorry, I could not find a similar answer on SO. I have the following markup (generated by jQuery UI autocomplete, but that is not the issue here): <ul style="z-index: 1; display: block; border: thin solid red; width: 200px;"> <li> <a> <div style="font-size: 0.85em;"> <span style="float: right; padding-left: 10px; color: gray;"> United Kingdom </span> <span style="">text text text text text</span> </div> </a> </li> </ul> As you can see in this fiddle: http://jsfiddle.net/fVr8P/2/ the text wraps because the width is limited and the country span is "floating". What I would like is for the width to enlarge to accommodate the full length, but if I put width: auto; it expands to 100%. Background: the ul is of course coming from jQuery UI autocomplete. I am styling the results a bit using the autocomplete.html extension. The problem is that everything works fine in Firefox and Chrome because the autocomplete is set to the correct width on creation. In IE this does not happen (the width is too small), so the text wraps. I am hoping to come to a simple CSS-only solution, without fiddling around in jQuery.

    Read the article

  • Embedded Development Board

    - by ALF3130
    I'm new to the embedded development world and am looking to get my very first board. After some research, I realize that there aren't many choices with FPUs. This is important for my project, as I'm going to be doing quite a bit of floating point computation. I found the Mini2440, which seems to run on the ARM920T core. This particular unit is perfect for my needs (decent price, all the right I/O ports, and a touch screen to boot), but it seems that it doesn't have an FPU. I don't know how big a penalty I'd be paying for FP emulation, so I'm unsure whether to pull the trigger on this one. That said: Can someone please confirm whether this product (Mini2440) has an FPU or not? My project will do image capture and analysis. Does anyone have any experience with running things like OpenMP on such platforms? Please suggest any other similar boards in the $200-or-less price range that have an FPU. This world is new to me. Any other advice or things I should be aware of is much appreciated.

    Read the article

  • Preserving the order of annotations

    - by Ragunath Jawahar
    When obtaining the list of fields using getFields() and getDeclaredFields(), the order of the fields in a Java class is undefined. I need this order in my validation library. The order is not preserved (though some claim that it is preserved in JDK 6 and above, it is not guaranteed across VMs), and I cannot speculate on this because the order of annotations across fields is absolutely essential for the library. One way to get around this is to have an order or index attribute in my annotation. What worries me is that the code could become a bit cumbersome to maintain in the following case: if the user wants to insert a new annotated field, he might have to renumber all the other annotations in the class. I could have the order or index as a floating point number - float or double - but it wouldn't look good to have orders such as 1, 1.5, 2, etc. What would be an elegant solution for this problem? Here is some example code so that you can get an idea of the problem: @Required @TextRule (minLength = 6, message = "You need at least 6 characters.") private EditText usernameEditText; @Password private EditText passwordEditText; @ConfirmPassword private EditText confirmPasswordEditText; @Email private EditText emailEditText;

    Read the article

  • Calculations on the iteration count in for loop in Ruby 1.8.7

    - by user1805035
    I was playing around with Ruby and Latex to create a color coding set. I'm more than a novice with C++, but haven't looked at Ruby until now. So, still learning a lot of coding. I have the following block of code below. When attempting to run this, band1 = 1e+02. I've tried band1 = (BigDecimal(i) * 100).to_f thinking maybe there was some odd floating point issue. This is just me trying anything though as an integer multiplied by an integer should create an integer, if I'm still thinking correctly. I've tried a variety of other things as well (things that I can do in C++, but this ain't C++), but to no avail. (1..9).each do |i| #Band 1 (0..9).each do |j| #Band 2 (0..11).each do |k| #Band 3 #Band 3 Start #these are the colors of the resistor bands b1 = $c_band12[i] b2 = $c_band12[j] b3 = $c_band3[k] b4 = "Gold" oms = ((i*100) + (j*10)) * $mult[k] band1 = i*100 band2 = j band3 = $mult[k] end end end Not sure what I'm missing. Should I be using .each_with_index through these iterations? I've tried this: (1..9).each_with_index {|i, indexi| #Band 1 (0..9).each_with_index {|j, indexj| #Band 2 (0..11).each_with_index {|k, indexk| #Band 3 #Band 3 Start #these are the colors of the resistor bands b1 = $c_band12[i] b2 = $c_band12[j] b3 = $c_band3[k] b4 = "Gold" oms = ((i*100) + (j*10)) * $mult[k] band1 = indexk * 100 and I get the same answer. I can't see why 1*100 should equate to such a large number? Thank you, AT

    Read the article

  • Calculating with a variable outside of its bounds in C

    - by aquanar
    If I make a calculation with a variable where an intermediate part of the calculation goes higher than the bounds of that variable's type, is there any hazard that some platforms may not like? This is an example of what I'm asking: int a, b; a=30000; b=(a*32000)/32767; I have compiled this, and it does give the correct answer of 29297 (well, within truncation error, anyway). But the part that worries me is that 30,000*32,000 = 960,000,000, which is a 30-bit number and thus cannot be stored in a 16-bit int. The end result is well within the bounds of an int, but I was expecting the working memory to be allocated with the same size as the largest source variable, so that an overflow error would occur. This is just a small example to show my problem: I am trying to avoid using floating point by making the fraction be a fraction of the maximum amount able to be stored in that variable (in this case a signed integer, so 32767 on the positive side), because the embedded system I'm using does not, I believe, have an FPU. So how do most processors handle calculations that go out of the bounds of the source and destination variables?
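    A hedged illustration of the usual defensive fix when the target's int really is 16 bits (the exact integer widths here are assumptions, not taken from the question): cast one operand up front so the multiplication is carried out in 32-bit arithmetic and the 960,000,000 intermediate never has to fit in a 16-bit variable.

        #include <cstdint>
        #include <cstdio>

        int main() {
            std::int16_t a = 30000;
            // Cast one operand to a wider type so the multiply happens in 32-bit
            // arithmetic; the intermediate product then fits before the divide.
            std::int16_t b = static_cast<std::int16_t>(
                (static_cast<std::int32_t>(a) * 32000) / 32767);
            std::printf("%d\n", b);   // prints 29297
            return 0;
        }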

    Read the article

  • Windows Phone 7 Prototype 001: Speech Recognition on WP7

    At some point in the future it will be awesome when you can just tell your computer what to do and it does it - without typing - to help those of us with a blistering 11 WPM hunt and peck technique. Siri, a mobile digital assistant using speech recognition, was voted best tech at SXSW. I don't know about that one. Although, I'm sure it will get better when Apple rebuilds it and bundles it on iPhone 5. So how would you do that on WP7? There have been some videos floating around showing Bing with some voice control, so obviously the phone has speech recognition. So what options are there: System.Speech? Not included in WP7/SL. Nuance software like Siri? No WP7/SL version yet. Invoking the SAPI dlls on the phone? No automation factory in WP7 SL. Web services using System.Speech and the mic on the phone? YES! The last one was my least favorite, but it works for now. I built a quick sample app to show how to do text-to-speech and speech recognition on WP7. @eklimczak will not be happy with the developer-designed UI. In this sample there is a web service which provides access to the System.Speech APIs in .NET. Basically it's just passing around byte arrays. On the phone it's using the XNA audio frameworks to play the text-to-speech stream and to record using the microphone. The code is pretty simple and you can download it from the link at the end of this post. The only things to note are adjusting the WCF config to handle larger byte uploads, and that the Microphone API is a little weird with that 1 second buffer. It would be nice if you could just do mic.start and mic.end, which would return an array of bytes, instead of managing your own stream inside the buffer ready callback. A couple of downsides to this approach: Recording from the phone has some static. Could be my code, or my mic is bad / not calibrated right. Having to make web service calls instead of local access is not ideal (Microsoft, please add an API for the SAPI dlls). Although in the context of an app like Siri it's not so bad, since you need to do web service lookups to get data back. Speech recognition quality really depends on either a) a limited grammar set, like the pizza grammar in the sample, or b) training the recognizer. For the latter it would be annoying to have users train the system. Using the System.Speech stuff you'd have to have a profile for each user. So until Microsoft adds some speech client APIs on the phone or Nuance releases a WP7 product, this is a decent workaround. In the future I'd like to build something similar to Siri. I shall call it Iris in homage. I'm a big fan of mobile speech apps because frankly it's just not safe to Google while driving. Since some of my designer co-workers have been posting UI sketches for WP7, I'd like to start posting some code prototypes for things I try out on the phone. That will probably last 2 weeks, but for the moment I have like 10 posts in the queue. Sample Code (100% guaranteed to work on my emulator)

    Read the article

  • Converting Openfire IM datetime values in SQL Server to / from VARCHAR(15) and DATETIME data types

    - by Brian Biales
    A client is using Openfire IM for their users, and would like some custom queries to audit user conversations (which are stored by Openfire in tables in the SQL Server database). Because Openfire supports multiple database servers and multiple platforms, the designers chose to store all date/time stamps in the database as 15 character strings, which get converted to Java Date objects in their code (Openfire is written in Java).  I did some digging around, and, so I don't forget and in case someone else will find this useful, I will put the simple algorithms here for converting back and forth between SQL DATETIME and the Java string representation. The Java string representation is the number of milliseconds since 1/1/1970.  SQL Server's DATETIME is actually represented as a float, the value being the number of days since 1/1/1900, the portion after the decimal point representing the hours/minutes/seconds/milliseconds... as a fractional part of a day.  Try this and you will see this is true:     SELECT CAST(0 AS DATETIME) and you will see it returns the date 1/1/1900. The difference in days between SQL Server's 0 date of 1/1/1900 and the Java representation's 0 date of 1/1/1970 is found easily using the following SQL:   SELECT DATEDIFF(D, '1900-01-01', '1970-01-01') which returns 25567.  There are 25567 days between these dates. So to convert from the Java string to SQL Server's date time, we need to convert the number of milliseconds to a floating point representation of the number of days since 1/1/1970, then add the 25567 to change this to the number of days since 1/1/1900.  To convert to days, you need to divide the number by 1000 ms/s, then by  60 seconds/minute, then by 60 minutes/hour, then by 24 hours/day.  Or simply divide by 1000*60*60*24, or 86400000.   So, to summarize, we need to cast this string as a float, divide by 86400000 milliseconds/day, then add 25567 days, and cast the resulting value to a DateTime.  Here is an example:   DECLARE @tmp as VARCHAR(15)   SET @tmp = '1268231722123'   SELECT @tmp as JavaTime, CAST((CAST(@tmp AS FLOAT) / 86400000) + 25567 AS DATETIME) as SQLTime   To convert from SQL datetime back to the Java time format is not quite as simple, I found, because floats of that size do not convert nicely to strings, they end up in scientific notation using the CONVERT function or CAST function.  But I found a couple ways around that problem. You can convert a date to the number of  seconds since 1/1/1970 very easily using the DATEDIFF function, as this value fits in an Int.  If you don't need to worry about the milliseconds, simply cast this integer as a string, and then concatenate '000' at the end, essentially multiplying this number by 1000, and making it milliseconds since 1/1/1970.  If, however, you do care about the milliseconds, you will need to use DATEPART to get the milliseconds part of the date, cast this integer to a string, and then pad zeros on the left to make sure this is three digits, and concatenate these three digits to the number of seconds string above.  And finally, I discovered by casting to DECIMAL(15,0) then to VARCHAR(15), I avoid the scientific notation issue.  So here are all my examples, pick the one you like best... 
    First, here is the simple approach if you don't care about the milliseconds:   DECLARE @tmp as VARCHAR(15)   DECLARE @dt as DATETIME   SET @dt = '2010-03-10 14:35:22.123'   SET @tmp = CAST(DATEDIFF(s, '1970-01-01 00:00:00' , @dt) AS VARCHAR(15)) + '000'   SELECT @tmp as JavaTime, @dt as SQLTime If you want to keep the milliseconds:   DECLARE @tmp as VARCHAR(15)   DECLARE @dt as DATETIME   DECLARE @ms as int   SET @dt = '2010-03-10 14:35:22.123'   SET @ms = DATEPART(ms, @dt)   SET @tmp = CAST(DATEDIFF(s, '1970-01-01 00:00:00' , @dt) AS VARCHAR(15))           + RIGHT('000' + CAST(@ms AS VARCHAR(3)), 3)   SELECT @tmp as JavaTime, @dt as SQLTime Or, in one fell swoop:   DECLARE @dt as DATETIME   SET @dt = '2010-03-10 14:35:22.123'   SELECT @dt as SQLTime     , CAST(DATEDIFF(s, '1970-01-01 00:00:00' , @dt) AS VARCHAR(15))           + RIGHT('000' + CAST( DATEPART(ms, @dt) AS VARCHAR(3)), 3) as JavaTime   And finally, a way to simply reverse the math used when converting from Java date to SQL date. Note the parentheses - watch out for operator precedence; you want to subtract, then multiply:   DECLARE @dt as DATETIME   SET @dt = '2010-03-10 14:35:22.123'   SELECT @dt as SQLTime     , CAST(CAST((CAST(@dt as Float) - 25567.0) * 86400000.0 as DECIMAL(15,0)) as VARCHAR(15)) as JavaTime Interestingly, I found that converting to SQL datetime can lose some accuracy: when I converted the time above to Java time and then converted that back to DATETIME, the number of milliseconds was 120, not 123. As I am not interested in the milliseconds, this is OK for me. But you may want to look into using DATETIME2 in SQL Server 2008 for more accuracy.
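    For readers who want the same arithmetic outside T-SQL, here is a minimal sketch of the two conversions in C++; the 25567-day epoch offset and the 86,400,000 ms-per-day constant come straight from the article, while the helper names are purely illustrative.

        #include <cstdint>
        #include <cstdio>

        // Java time: milliseconds since 1970-01-01. SQL DATETIME: days since 1900-01-01
        // (the fractional part is the time of day). The two epochs are 25567 days apart.
        double javaMillisToSqlDays(std::int64_t javaMillis) {
            return static_cast<double>(javaMillis) / 86400000.0 + 25567.0;
        }

        std::int64_t sqlDaysToJavaMillis(double sqlDays) {
            return static_cast<std::int64_t>((sqlDays - 25567.0) * 86400000.0);
        }

        int main() {
            std::int64_t ms = 1268231722123LL;        // the article's example value
            double days = javaMillisToSqlDays(ms);
            std::printf("%.6f days since 1900 -> %lld ms\n",
                        days, static_cast<long long>(sqlDaysToJavaMillis(days)));
        }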

    Read the article

  • Silverlight Cream for May 15, 2010 -- #862

    - by Dave Campbell
    In this Issue: Victor Gaudioso, Antoni Dol(-2-), Brian Genisio, Shawn Wildermuth, Mike Snow, Phil Middlemiss, Pete Brown, Kirupa, Dan Wahlin, Glenn Block, Jeff Prosise, Anoop Madhusudanan, and Adam Kinney. Shoutouts: Victor Gaudioso would like you to Checkout my Interview with Microsoft’s Murray Gordon at MIX 10 Pete Brown announced: Connected Show Podcast #29 With … Me! From SilverlightCream.com: New Silverlight Video Tutorial: How to Create Fast Forward for the MediaElement Victor Gaudioso's latest video tutorial is on creating the ability to fast-forward a MediaElement... check it out in the tutorial player itself! Overlapping TabItems with the Silverlight Toolkit TabControl Antoni Dol has a very cool tutorial up on the Toolkit TabItems control... not only is he overlapping them quite nicely but this is a very cool tutorial... QuoteFloat: Animating TextBlock PlaneProjections for a spiraling effect in Silverlight Antoni Dol also has a Blend tutorial up on animating TextBlock items... run the demo and you'll want to read the rest :) Adventures in MVVM – My ViewModel Base – Silverlight Support! Brian Genisio continues his MVVM tutorials with this update on his ViewModel base using some new C# 4.0 features, and fully supports Silverlight and WPF My Thoughts on the Windows Phone 7 Shawn Wildermuth gives his take on WP7. He included a port of his XBoxGames app to WP7 ... thanks Shawn! Silverlight Tip of the Day #20 – Using Tooltips in Silverlight I figured Mike Snow was going to overrun me with tips since I have missed a couple days, but there's only one! ... and it's on Tooltips. Animating the Silverlight opacity mask Phil Middlemiss has an article at SilverZine describing a Behavior he wrote (and is sharing) that turns a FrameworkElement into an opacity mask for it's parent container... cool demo on the page too. Breaking Apart the Margin Property in Xaml for better Binding Pete Brown dug in on a Twitter message and put some thoughts down about breaking a Margin apart to see about binding to the individual elements. Building a Simple Windows Phone App Kirupa has a 6-part tutorial up on building not-your-typical first WP7 application... all good stuff! Integrating HTML into Silverlight Applications Dan Wahlin has a post up discussing three ways to display HTML inside a Silverlight app. Hello MEF in Silverlight 4 and VB! (with an MVVM Light cameo) Glenn Block has a post up discussing MEF, MVVM, and it's in VB this time... and it's actually a great tutorial top to bottom... all source included of course :) Understanding Input Scope in Silverlight for Windows Phone Jeff Prosise has a good post up on the WP7 SIP and how to set the proper InputScope to get the SIP you want. Thinking about Silverlight ‘desktop’ apps – Creating a Standalone Installer for offline installation (no browser) Anoop Madhusudanan is discussing something that's been floating around for a while... installing Silverlight from, say, a CD or DVD when someone installs your app. He's got some good code, but be sure to read Tim Heuer and Scott Guthrie's comments, and consider digging deeper into that part. Using FluidMoveBehavior to animate grid coordinates in Silverlight Adam Kinney has a cool post up on animating an object using the FluidMotionBehavior of Blend 4... looks great moving across a checkerboard... check out the demo, then grab the code. Stay in the 'Light! 

    Read the article

  • OS Development. Only Few Particular Questions

    - by Total Anime Immersion
    I am new to this site as a member but have consulted its answers quite a lot of times. Besides, my questions regarding OS development haven't been answered in any forum. In OS dev we make a bootloader. The org point is 7C00h. Why so? Why not 0000h? What are the last two signature bytes in the bootloader used for? People on every forum have answered that they are important for the system to recognize it as bootable media, but I want a specific answer: what does each of those signature bytes do? I have the basic concept of a kernel. The point is, it relates to the different files required in a system; it sort of binds up everything that is individually developed. Now the thing is that I have floating ideas in my mind regarding different aspects like keyboard, mouse, etc. How do I put them all together? Which should I start with first? If possible, please provide a step by step procedure for the startup of the kernel. Suppose I have developed my OS entirely in C and Assembly. Now the question is: will exe files work on my system? If they don't, then I have to create my own files and publish them, which is a bad idea. The next step would be for me to go for a compiler for a language which I have designed myself. Now the point is: how do I implement the compiler into my OS? After all this, my final question is: how do you go about multitasking and multithreading? And I don't want to use int 21h as it's DOS-specific, so how do I go about making files, renaming them, etc.? And all assembly books teach 16-bit programming; how do I go about doing 32-bit or 64-bit with the knowledge I have? If the basics and instructions are the same, I don't mind, but how do I go about it otherwise? Don't tell me to give up the idea because I WON'T. And don't tell me it's too complex, because I have a sharp knowledge of the working of a system, C, Java, Assembly, C++ and Python, C#, Visual Basic - and not just the basics but full fledged API development - but I really want to go deep into the systems part, so I want professional help. And I have gone through many OS project files, but I want help particularly from this site as there are people with knowledge depth who can guide me the right way. And please don't suggest any books above $20, and they should be available on Flipkart, as Amazon charges massively for shipping and I prefer free shipping from Flipkart.
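    On the signature question only, a tiny illustration (C++, purely to show the layout; this is the standard PC boot convention rather than anything taken from the question): the BIOS loads the first 512-byte sector to address 7C00h and treats it as bootable only if the last two bytes are 0x55 followed by 0xAA.

        #include <cstdint>
        #include <cstdio>

        int main() {
            std::uint8_t sector[512] = {0};   // a blank boot sector image
            // Bytes 510 and 511 hold the boot signature the BIOS checks for.
            sector[510] = 0x55;
            sector[511] = 0xAA;
            std::printf("bootable? %s\n",
                        (sector[510] == 0x55 && sector[511] == 0xAA) ? "yes" : "no");
            return 0;
        }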

    Read the article

  • Speeding up procedural texture generation

    - by FalconNL
    Recently I've begun working on a game that takes place in a procedurally generated solar system. After a bit of a learning curve (having neither worked with Scala, OpenGL 2 ES or Libgdx before), I have a basic tech demo going where you spin around a single procedurally textured planet: The problem I'm running into is the performance of the texture generation. A quick overview of what I'm doing: a planet is a cube that has been deformed to a sphere. To each side, a n x n (e.g. 256 x 256) texture is applied, which are bundled in one 8n x n texture that is sent to the fragment shader. The last two spaces are not used, they're only there to make sure the width is a power of 2. The texture is currently generated on the CPU, using the updated 2012 version of the simplex noise algorithm linked to in the paper 'Simplex noise demystified'. The scene I'm using to test the algorithm contains two spheres: the planet and the background. Both use a greyscale texture consisting of six octaves of 3D simplex noise, so for example if we choose 128x128 as the texture size there are 128 x 128 x 6 x 2 x 6 = about 1.2 million calls to the noise function. The closest you will get to the planet is about what's shown in the screenshot and since the game's target resolution is 1280x720 that means I'd prefer to use 512x512 textures. Combine that with the fact the actual textures will of course be more complicated than basic noise (There will be a day and night texture, blended in the fragment shader based on sunlight, and a specular mask. I need noise for continents, terrain color variation, clouds, city lights, etc.) and we're looking at something like 512 x 512 x 6 x 3 x 15 = 70 million noise calls for the planet alone. In the final game, there will be activities when traveling between planets, so a wait of 5 or 10 seconds, possibly 20, would be acceptable since I can calculate the texture in the background while traveling, though obviously the faster the better. Getting back to our test scene, performance on my PC isn't too terrible, though still too slow considering the final result is going to be about 60 times worse: 128x128 : 0.1s 256x256 : 0.4s 512x512 : 1.7s This is after I moved all performance-critical code to Java, since trying to do so in Scala was a lot worse. Running this on my phone (a Samsung Galaxy S3), however, produces a more problematic result: 128x128 : 2s 256x256 : 7s 512x512 : 29s Already far too long, and that's not even factoring in the fact that it'll be minutes instead of seconds in the final version. Clearly something needs to be done. Personally, I see a few potential avenues, though I'm not particularly keen on any of them yet: Don't precalculate the textures, but let the fragment shader calculate everything. Probably not feasible, because at one point I had the background as a fullscreen quad with a pixel shader and I got about 1 fps on my phone. Use the GPU to render the texture once, store it and use the stored texture from then on. Upside: might be faster than doing it on the CPU since the GPU is supposed to be faster at floating point calculations. Downside: effects that cannot (easily) be expressed as functions of simplex noise (e.g. gas planet vortices, moon craters, etc.) are a lot more difficult to code in GLSL than in Scala/Java. Calculate a large amount of noise textures and ship them with the application. I'd like to avoid this if at all possible. Lower the resolution. Buys me a 4x performance gain, which isn't really enough plus I lose a lot of quality. 
Find a faster noise algorithm. If anyone has one I'm all ears, but simplex is already supposed to be faster than perlin. Adopt a pixel art style, allowing for lower resolution textures and fewer noise octaves. While I originally envisioned the game in this style, I've come to prefer the realistic approach. I'm doing something wrong and the performance should already be one or two orders of magnitude better. If this is the case, please let me know. If anyone has any suggestions, tips, workarounds, or other comments regarding this problem I'd love to hear them.

    Read the article

  • SQL SERVER – Partition Parallelism Support in expressor 3.6

    - by pinaldave
    I am very excited to learn that there is a new version of expressor’s data integration platform coming out in March of this year. It will be version 3.6, and I look forward to using it and telling everyone about it. Let me describe a little bit more about what will be so great in expressor 3.6: a greatly enhanced user interface, parallel processing, and bulk artifact upgrading. The User Interface: First let me cover the most obvious enhancements. The expressor Studio user interface (UI) has had some significant work done. Kudos to the expressor Engineering team; the entire UI is a visual masterpiece that is very responsive and intuitive. The improvements are more than just eye candy; they provide significant productivity gains when developing expressor Dataflows. Operator shape icons now include a description that identifies the function of each operator, instead of having to guess at the function from the icon. Operator shapes and highlighting depict the current function and status: disabled, enabled, complete, incomplete, and error. Each status displays an appropriate message in the message panel with correction suggestions. Floating or docking property panels provide descriptive tool tips for each property as well as auto-resize when adjusting the canvas, without having to search Help or scroll around to get access to the property. Progress and status indicators let you know when an operation is working. A “no limit” canvas with snap-to-grid allows automatic sizing and accurate positioning when you have numerous operators in the Dataflow. The inline tool bar offers quick access to pan, zoom, fit and overview functions. Selecting multiple artifacts with a right-click context allows you to manage your workspace more efficiently. Partitioning and Parallel Processing: Partitioning allows each operator to process multiple subsets of records in parallel, as opposed to processing all records that flow through that operator in a single sequential set. This capability allows the user to configure the expressor Dataflow to run in a way that most efficiently utilizes the resources of the hardware where the Dataflow is running. Partitions can exist in most individual operators. Using partitions increases the speed of an expressor data integration application, therefore improving performance and load times. With the expressor 3.6 Enterprise Edition, expressor simplifies enabling parallel processing by adding intuitive partition settings that are easy to configure. Bulk Artifact Upgrading: Bulk artifact upgrading sounds a bit intimidating, but it actually is not, and it is a welcome addition to expressor Studio. In past releases, users were prompted to confirm that they wanted to upgrade their individual artifacts only when opened. This was a cumbersome and repetitive process. Now with bulk artifact upgrading, a user can easily select which artifact or group of artifacts to upgrade all at once. As you can see, there are many new features and upgrade options that will prove to make expressor Studio quicker and more efficient. I hope I’m not the only one who is excited about all these new upgrades, and I hope you will try expressor and share your experience with me. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • SharePoint Saturday DC

    - by Mark Rackley
    Wow… did you see this thing? 927 attendees? An exhibition hall full of vendors? 94 speakers? 100 sessions?? Insane is a word that comes to mind… SharePoint Saturday DC was definitely epic as far as SharePoint Saturdays go. I got to catch up with a lot of friends and make some new ones.  Met a couple of fans of the blog (hello ladies…;))  Did you know that people actually read this thing? I guess that means I need to stop putting so much garbage on here and more content. I’ll get right on that as soon as I find out how to add 6 hours to each day. Anyway, once again I did my “Wrapping Your Head Around the SharePoint Beast” session.  I tweaked it even more from Huntsville and presented to a packed room with some people sitting on the floor and standing in the aisles. It was a great crowd, very interactive and they seemed interested at least. Thank you guys so much for attending and please feel free to tell me of any suggestions you have to make the presentation better.  This is one of the presentations that will probably never die. Everyone beginning SharePoint development needs a good introduction and starting point. My goal is to make this THE session to see on the subject. So, a little interesting data about my class. Half of the room was brand new to SharePoint and only one person was using 2010. That tells me that this session still has legs and that 2007 isn’t going anywhere anytime soon.  I know my organization will be using 2007 for at least a couple more years. Oh yeah… the slide deck?  Here it is: SharePoint Saturday DC Slide Deck So, SharePoint Saturday was truly tremendous and if you weren’t there you missed out. @meetdux, @usher, and the rest of their crew did a spectacular job. You guys rock and are a huge asset to the community. Thanks for allowing me to speak. What’s up next for me?  I’m so glad you asked…. SHAREPOINT SATURDAY OZARKS IS JUNE 12TH! Although SharePoint Saturday Ozarks on June 12 in Harrison Arkansas will be a much more intimate event than DC, it promises to be a most memorable event. We’ve got over 30 speakers and sessions, some cool stuff to give away, and we’re going floating down the Buffalo River on the 13th. Let’s see you do THAT in DC.  :) Anyway, I hope to see you there and I would truly appreciate any help you can do to help publicize the event. We just got internet here in the hills and most people here are still looking for the “any” key….

    Read the article

  • Critical Threads Optimization

    - by Rafael Vanoni
    Background One of the more common issues we've been seeing in the field is the growing difficulty in optimizing performance of multi-threaded applications. A good portion of this difficulty is due to the increasing complexity of modern processors that present various degrees of sharing relationships between hardware components. Take any current CMT processor and you'll find any number of CPUs sharing execution pipelines, floating point units, caches, etc. Consequently, applying the traditional recipe of one software thread for each CPU will have varying degrees of success, according to the layout of the underlying hardware. On top of this increasing complexity we've also seen processors with features that aim at dynamically resourcing software threads according to their utilization. Intel's Turbo Boost allows processors to increase their operating frequency if there is enough thermal headroom available and the processor isn't fully utilized. More recently, the SPARC T4 processor introduced dynamic threading, allowing each core to dynamically allocate more resources to its active CPUs. Both cases are in essence recognizing that current processors will be running a wide mix of workloads, some will be designed for throughput, others for low latency. The hardware is providing mechanisms to dynamically resource threads according to their runtime behavior. We're very aware of these challenges in Solaris, and have been working to provide the best out of box performance while providing mechanisms to further optimize applications when necessary. The Critical Threads Optimzation was introduced in Solaris 10 8/11 and Solaris 11 as one such mechanism that allows customers to both address issues caused by contention over shared hardware resources and explicitly take advantage of features such as T4's dynamic threading. What it is The basic idea is to allow performance critical threads to execute with more exclusive access to hardware resources. For example, when deploying an application that implements a producer/consumer model, it'll likely be advantageous to give the producer more exclusive access to the hardware instead of having it competing for resources with all the consumers. In the case of a T4 based system, we may want to have a producer running by itself on a single core and create one consumer for each of the remaining CPUs. With the Critical Threads Optimization we're extending the semantics of scheduling priorities (which thread should run first) to include priority over shared resources (which thread should have more "space"). Now the scheduler will not only run higher priority threads first: it will also provide them with more exclusive access to hardware resources if they are available. How does it work ? Using the previous example in Solaris 11, all you'd have to do would be to place the producer in the Fixed Priority (FX) scheduling class at priority 60, or in the Real Time (RT) class at any priority and Solaris will try to give it more "hardware space". On both Solaris 10 8/11 and Solaris 11 this can be achieved through the existing priocntl(1,2) and priocntlset(2) interfaces. If your application already assigns these priorities to performance critical threads, there's no additional step you need to take. One important aspect of this optimization is that it requires some level of idleness in the system, either as a result of sizing the application before hand or through periods of transient idleness during runtime. 
    If the system is fully committed, the scheduler will put all the available CPUs to work. Best practices: If you're an application developer, we encourage you to look into assigning the right priorities for the different threads in your application. Solaris provides different scheduling classes (Time Share, Interactive, Fair Share, Fixed Priority and Real Time) that offer different policies and behaviors. It is not always simple to figure out which set of threads is critical to the performance of a workload, and it may not always be feasible to take advantage of this optimization, but we believe that this can be correctly (and safely) done during development. Overall, the out of box performance in Solaris should meet your workload's requirements. If you are looking for that extra bit of performance, then the Critical Threads Optimization may be what you're looking for.

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but using xVelocity and VertiPaq when we talk about Analysis Services has the same meaning. In this post I’ll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to optimize. A first approach can be looking in the DataDir of Analysis Services and looking for the folder containing the database. Then, look for the biggest files in all subfolders and you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in and execute it) you will see all database objects sorted by used size in descending order. SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC You can look at the first rows in order to understand which are the most expensive columns in your tabular model. The interesting data provided are: TABLE_ID: the name of the object (it can also be a dictionary or an index); COLUMN_ID: the name of the column the object belongs to (you may also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes); RECORDS_COUNT: the number of rows in the column; USED_SIZE: the memory used for the object. By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are: Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model. Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (e.g. a temperature), round the number to the decimal precision that is relevant to you. Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article. Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower density columns (those with a small number of distinct values) and going to higher density columns (those with high cardinality) is the technique that provides the best compression ratio. After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.

    Read the article

  • Arcball Problems with UDK

    - by opdude
    I'm trying to re-create an arcball example from a Nehe, where an object can be rotated in a more realistic way while floating in the air (in my game the object is attached to the player at a distance like for example the Physics Gun) however I'm having trouble getting this to work with UDK. I have created an LGArcBall which follows the example from Nehe and I've compared outputs from this with the example code. I think where my problem lies is what I do to the Quaternion that is returned from the LGArcBall. Currently I am taking the returned Quaternion converting it to a rotation matrix. Getting the product of the last rotation (set when the object is first clicked) and then returning that into a Rotator and setting that to the objects rotation. If you could point me in the right direction that would be great, my code can be found below. class LGArcBall extends Object; var Quat StartRotation; var Vector StartVector; var float AdjustWidth, AdjustHeight, Epsilon; function SetBounds(float NewWidth, float NewHeight) { AdjustWidth = 1.0f / ((NewWidth - 1.0f) * 0.5f); AdjustHeight = 1.0f / ((NewHeight - 1.0f) * 0.5f); } function StartDrag(Vector2D startPoint, Quat rotation) { StartVector = MapToSphere(startPoint); } function Quat Update(Vector2D currentPoint) { local Vector currentVector, perp; local Quat newRot; //Map the new point to the sphere currentVector = MapToSphere(currentPoint); //Compute the vector perpendicular to the start and current perp = startVector cross currentVector; //Make sure our length is larger than Epsilon if (VSize(perp) > Epsilon) { //Return the perpendicular vector as the transform newRot.X = perp.X; newRot.Y = perp.Y; newRot.Z = perp.Z; //In the quaternion values, w is cosine (theta / 2), where //theta is the rotation angle newRot.W = startVector dot currentVector; } else { //The two vectors coincide, so return an identity transform newRot.X = 0.0f; newRot.Y = 0.0f; newRot.Z = 0.0f; newRot.W = 0.0f; } return newRot; } function Vector MapToSphere(Vector2D point) { local float x, y, length, norm; local Vector result; //Transform the mouse coords to [-1..1] //and inverse the Y coord x = (point.X * AdjustWidth) - 1.0f; y = 1.0f - (point.Y * AdjustHeight); length = (x * x) + (y * y); //If the point is mapped outside of the sphere //( length > radius squared) if (length > 1.0f) { norm = 1.0f / Sqrt(length); //Return the "normalized" vector, a point on the sphere result.X = x * norm; result.Y = y * norm; result.Z = 0.0f; } else //It's inside of the sphere { //Return a vector to the point mapped inside the sphere //sqrt(radius squared - length) result.X = x; result.Y = y; result.Z = Sqrt(1.0f - length); } return result; } DefaultProperties { Epsilon = 0.000001f } I'm then attempting to rotate that object when the mouse is dragged, with the following update code in my PlayerController. //Get Mouse Position MousePosition.X = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.X; MousePosition.Y = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.Y; newQuat = ArcBall.Update(MousePosition); rotMatrix = MakeRotationMatrix(QuatToRotator(newQuat)); rotMatrix = rotMatrix * LastRot; LGMoveableActor(movingPawn.CurrentUseableObject).SetPhysics(EPhysics.PHYS_Rotating); LGMoveableActor(movingPawn.CurrentUseableObject).SetRotation(MatrixGetRotator(rotMatrix));

    Read the article

  • Issues glVertexAttribPointer last 2 parameters?

    - by NoobScratcher
    Introduction: Hello, I will start out by explaining my setup, showing samples as I go along to explain the situation. I'm using these tools: OpenGL 3.3, GLSL 330, C++. Problem: The problem is that when I render the Wavefront .obj 3D model it gives a very weird visual glitch: the model was supposed to be a square, but instead it's a triangulated mess with parts of the vertices stretched out in massive amounts towards the bottom left side of the frustum. Explanation: I'm using std::vectors to store my Wavefront .obj model data, using sscanf to get the floating point values into the structure members x, y, z and store them into the Points structure variable p; int index = IndexAssigner(1, 1); ifstream file (list[index].c_str() ); points.push_back(Point()); Point p; int face[4]; while (!file.eof() ) { char modelbuffer[10000]; file.getline(modelbuffer, 10000); switch(modelbuffer[0]) { case 'v' : sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z); points.push_back(p); break; case 'f': sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3 ); faces.push_back(face[0]); faces.push_back(face[1]); faces.push_back(face[2]); faces.push_back(face[3]); } //Turn on FileReader aka "RENDER CODE" FileReader = true; } Then I render the Points vector, using the .data() member of std::vector, to the frustum. Other declarations: int numfloats = 4; float* point=reinterpret_cast<float*>(&points[0]); int num_bytes=numfloats*sizeof(float); Vector declarations: struct Point {float x, y , z; }; std::vector<int>faces; std::vector<Point>points; Render code: glGenBuffers(1, &vertexbuffer); glGenTextures(1, &ModelTexture); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); glBindTexture(GL_TEXTURE_3D, ModelTexture); glTexImage2D(GL_TEXTURE_2D, 0,GL_RGBA, ModelSurface->w, ModelSurface->h, 0, GL_BGR, GL_UNSIGNED_BYTE, ModelSurface->pixels); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glBufferData(GL_ARRAY_BUFFER, sizeof(points), points.data(), GL_STATIC_DRAW); glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,num_bytes ,points.data()); glEnableVertexAttribArray(3); //Translation Process GLfloat TranslationMatrix[] = { 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0 }; //Send Translation Matrix up to the vertex shader glUniformMatrix4fv(translation, 1, TRUE, TranslationMatrix); glDrawElements( GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data()); I tried looking at what was causing this and went through every function and every parameter, and looked at the man pages. I then found out that it could be my glVertexAttribPointer. Here are the man pages for glVertexAttribPointer: http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml The last 2 parameters are my problem. How do I write those last 2 parameters? Do I try putting the data from Points into them? glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,num_bytes ,points.data()); How does it work with vectors? Is it fast? If you cannot be bothered to look at the man pages, here is the relevant text from them directly: stride - Specifies the byte offset between consecutive generic vertex attributes. If stride is 0, the generic vertex attributes are understood to be tightly packed in the array. The initial value is 0. pointer - Specifies a pointer to the first component of the first generic vertex attribute in the array. The initial value is 0. If you want my full source: http://ideone.com/fPfkg Thanks again if you do read this.
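    For anyone puzzling over the same two parameters, here is a hedged sketch (not the poster's code; it assumes a current GL 3.3 context, a bound VAO, and a shader that uses layout location 3) of how they are usually filled when a VBO is bound: stride is the byte distance between consecutive vertices, and the final argument becomes a byte offset into the bound buffer rather than a CPU pointer. Note also that sizeof on a std::vector gives the size of the vector object itself, not of its contents.

        #include <vector>
        // GL declarations come from whatever loader the project uses (GLEW, glad, ...).

        struct Point { float x, y, z; };

        GLuint uploadVertices(const std::vector<Point>& points) {
            GLuint vbo = 0;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);

            // Upload the actual vertex bytes: element count * element size
            // (sizeof(points) would only give the size of the vector object).
            glBufferData(GL_ARRAY_BUFFER,
                         points.size() * sizeof(Point),
                         points.data(),
                         GL_STATIC_DRAW);

            // Attribute 3 (matching the shader), 3 floats per vertex,
            // stride = sizeof(Point), starting at byte offset 0 within the VBO.
            glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Point), (const void*)0);
            glEnableVertexAttribArray(3);
            return vbo;
        }

    The byte-offset interpretation of the last parameter only applies while a buffer is bound to GL_ARRAY_BUFFER; passing points.data() there, as the render code in the question does, mixes the client-memory and VBO styles.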

    Read the article

  • Linux Mint 14 severe overheating

    - by agvares
    I've tried switching to Mint (it was Mint 12) about 6 months ago, but failed to do it due to enormous overheating which made it virtually impossible to run any applications more complex than Gedit. Time had passed, and now I'm back again with Mint 14, but feeling way more determined. What I face are the following issues: 1) Great overheating (and by great I mean that just by doing nothing my CPU temp is floating around 70-75 C, which I find a lot) 2) Running multiple applications (let's say Chrome, Skype and Pidgin) results in critical overheating and immediate shutdown of the system 3) Due to the stuff listed above, my battery drains in about 10-15 minutes, pretty much turning my laptop into a crippled desktop machine I've got a HP-dv6 laptop (i7, 6gb RAM, dual graphics) Here's the output of lspci: 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05) 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5) 00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5) 00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5) 00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5) 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05) 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Whistler XT [AMD Radeon HD 6700M Series] 07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06) 0d:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01) 13:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5209 PCI Express Card Reader (rev 01) 19:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04) what i've tried to do already: 1) I've edited my grub file to add some "splash" arguments there 2) Installed jupiter and powertop 3) Tried to upgrade to newer kernels (up to 3.8), btw running anything newer than 3.5 results in both resolution and wi-fi detection fail 4) Read lots of forum threads devoted to the topic So, my question is simple: What else might I do to become finally able to use Mint as my default OS without the risk of being burned alive by the CPU heat?

    Read the article

  • Linux bonded Interfaces hanging periodically

    - by David
    I have several hosts that are showing connectivity problems. When working from the command line, for example, typing freezes for a second or so, then recovers, then it happens again. The most egregious example host would freeze (input) for 15-30 seconds, then recover, only to go out again 5 seconds later. Switching cables didn't do anything, but removing one of the physical cables caused everything to clear up instantly (which is why I think this is a network problem). Looking at the network, I couldn't see any traffic floating around that would explain this. These Ethernet interfaces (Gigabit Dell) were working normally before, but since we moved the systems - and put them on a new set of switches - this has been a problem on multiple theoretically identically configured hosts. The original switches were an HP Procurve 1810-24G and an HP Procurve 1800-24G connected with LLDP; the new switches are both Cisco SG 200-26, which I understand are rebranded Linksys switches. Is this caused by a problem with the switches? Is it the switch configuration? Are the Cisco switches incapable of handling this? I don't see where the bonding configuration is located; I searched the usual /etc/sysconfig/network/devices but there's nothing in there about options (like MII polling) and nothing about how the two interfaces are balanced. Searching the scripts, I can't find anything in /etc/init.d/network either. The hosts are almost all Red Hat Enterprise Linux 5.x systems (5.6, 5.7), but some are Ubuntu Server 10.04.3 Lucid Lynx; I need help with both if it comes to that. UPDATE: We're also seeing some problems with servers on the original switches. The HP switches and the Cisco switches are also interconnected (temporarily); there is a cable running from one switch to the other. Pings on any of these hosts show about one ICMP packet out of every 5-6 getting dropped (timed out). Could there be an interaction between the two switches? Oh, and the hosts are using bonding with balance-rr as the mode.

    Read the article

  • Scriptaculous Shaking Effect Problem

    - by TheOnly92
    The Scriptaculous shaking effect somehow produces bugs in WebKit browsers, including Chrome and Safari. When shaking, the element shifts to the top left of the screen, covering everything. Example code is given below; is there any way to solve this?
<html>
<head>
<script type='text/javascript' src='http://ajax.googleapis.com/ajax/libs/prototype/1.6.1/prototype.js'></script>
<script type='text/javascript' src='http://ajax.googleapis.com/ajax/libs/scriptaculous/1.8.3/scriptaculous.js'></script>
<script type='text/javascript' src='http://ajax.googleapis.com/ajax/libs/scriptaculous/1.8.3/scriptaculous.js?load=effects'></script>
</head>
<body>
<div style="z-index: 20000; position: fixed; display: block; bottom: 10px; right: 10px; background-attachment: scroll; background-color: white;" id="floating_text">
<p>This should be some floating text.</p>
<p>Some more floating text.</p>
</div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer dui ligula, tempus adipiscing posuere id, sollicitudin sed nulla. Sed neque diam, volutpat non interdum vel, pellentesque vitae lorem. Vivamus et leo risus. Fusce at nunc nulla, non ultricies elit. Aliquam erat volutpat. Aliquam pulvinar mi at purus laoreet eu varius nisl laoreet. Mauris lobortis sapien diam. Maecenas arcu est, ullamcorper fringilla placerat nec, semper ut arcu. Curabitur metus nisl, ornare nec posuere at, tincidunt tempor nisi. Ut ut est risus. Curabitur elit urna, sagittis sagittis cursus quis, accumsan eget nulla. Donec odio ante, rutrum at fermentum vel, tempus gravida odio. Quisque a ante a urna vehicula posuere ac ut orci. Integer luctus sem et justo condimentum consequat. Phasellus pharetra malesuada velit, et commodo arcu imperdiet vitae. Suspendisse vitae risus orci. Maecenas massa tortor, sodales ut luctus ac, lacinia vitae sapien. Vestibulum sit amet rutrum est. Nullam magna erat, semper a volutpat id, porta sed nisl.</p>
<p>Praesent nec consectetur sapien. Integer mollis libero a odio pharetra vulputate. Donec mattis consequat arcu, vel ultricies orci imperdiet sit amet. Mauris sit amet tellus libero. Morbi ac venenatis ligula. Cras tellus neque, porttitor sit amet hendrerit nec, ornare quis tellus. Nam iaculis mi at mi bibendum at commodo justo pretium. Ut in nibh non diam hendrerit fermentum a ut odio. Curabitur lorem turpis, tincidunt et rhoncus et, pulvinar a metus. Vestibulum a quam sit amet arcu condimentum cursus vitae feugiat lectus. Sed ut lorem tellus, non sagittis enim. Curabitur lectus eros, commodo a elementum et, molestie eget est. Donec ullamcorper, arcu nec volutpat auctor, sem odio interdum tellus, nec volutpat lacus libero at nisl. Aliquam metus sapien, aliquam a rutrum ac, tincidunt at purus. Donec in erat mi. Quisque semper mauris in massa bibendum sed tincidunt augue facilisis. In tempus lacinia urna ac tristique.</p>
<p>Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Fusce tristique urna sem. Etiam iaculis aliquam dui nec porta. Proin tristique diam non augue mattis tristique. Phasellus nulla erat, adipiscing sed cursus sed, pulvinar eget nisl. Maecenas blandit nibh eu nisl facilisis et semper turpis posuere. Pellentesque auctor sem in massa sollicitudin congue. Vivamus quis lacinia massa. Aliquam sodales dictum magna, eget ullamcorper eros placerat at. Quisque gravida diam sit amet nunc porta aliquam. Ut quis aliquet est. Maecenas risus tellus, euismod id porttitor at, porta id turpis. Phasellus id molestie ante.
Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Aenean purus nibh, egestas vestibulum aliquet eget, luctus nec eros. Nulla facilisi. Quisque molestie, sem interdum posuere lacinia, nisl purus ornare lectus, id dapibus lacus dolor in ipsum. Aenean pharetra leo nulla.</p> <p>Curabitur nisi quam, iaculis eget pellentesque vel, pretium sed massa. In viverra, tellus at sollicitudin fringilla, orci eros blandit elit, a bibendum mauris dolor ut metus. Vivamus pellentesque suscipit diam, vitae euismod mi pellentesque vitae. Nullam neque libero, vehicula ut iaculis at, tincidunt eget leo. Suspendisse vitae velit justo. Nullam vitae sem tincidunt nulla tincidunt mollis in id massa. Duis rhoncus elementum turpis quis mollis. Vivamus egestas urna in velit commodo iaculis. Aenean quis dolor eu odio porttitor rhoncus nec vel eros. Donec ut est eu nisl vehicula pulvinar et id dolor. Donec a dolor neque. Morbi tempus mattis tortor ut rutrum. Phasellus orci metus, pellentesque vel tincidunt nec, pulvinar eu ante. Duis faucibus felis et diam ullamcorper in feugiat urna dignissim. Quisque nec diam mauris, vel viverra arcu. Cras sagittis dignissim nisl in sagittis. Fusce venenatis rhoncus est, nec elementum libero dapibus eget. Donec eu velit metus. Sed sollicitudin felis a diam condimentum in suscipit neque varius. Nulla nec tortor tristique elit malesuada luctus luctus quis leo.</p> <p>Nullam at quam dui. Ut gravida, tellus malesuada faucibus gravida, purus nulla consequat lorem, pellentesque egestas justo quam et enim. Suspendisse fringilla tellus id odio tristique varius. Cras et metus elit. Etiam interdum adipiscing mollis. Aliquam aliquet vestibulum imperdiet. In consectetur, nunc cursus sodales scelerisque, tellus eros tristique nisl, ut luctus augue dolor vel nibh. Fusce eget dui sed eros tristique varius lacinia id sapien. Nullam ac lorem ac lacus cursus ultricies id a risus. Ut eget dolor sem. Aliquam euismod consequat euismod. Duis sit amet neque et massa ullamcorper tempor.</p> <p>Quisque rutrum, ipsum ac volutpat dictum, urna diam facilisis enim, ac vestibulum justo metus eu mi. Curabitur nunc sem, consequat a mollis non, bibendum vitae dolor. Mauris pulvinar pellentesque tellus, vel aliquet mauris vulputate vel. Morbi eu ante id nulla ultricies tincidunt. Proin porta, felis nec tincidunt iaculis, justo nibh laoreet dolor, eu sollicitudin arcu justo et odio. Sed suscipit tellus lobortis est tristique semper fermentum magna laoreet. Sed eget ante nunc, vitae varius purus. Mauris nec viverra neque. Morbi et lectus velit. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Integer sit amet lobortis magna.</p> <p>Phasellus elementum iaculis sem in consectetur. Curabitur nec dictum enim. Nunc at pellentesque augue. Nulla sit amet sapien neque, et molestie augue. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Proin non elit ante. Mauris justo tellus, feugiat at dapibus a, placerat id felis. Nullam lobortis vehicula rutrum. Fusce tristique pharetra urna, ac scelerisque ipsum consequat eget. Morbi at ipsum in tellus luctus volutpat. Duis placerat accumsan lacus, dictum convallis elit porttitor eu.</p> <p>Sed ac neque sit amet neque luctus rhoncus. Vestibulum sit amet commodo ante. Duis ullamcorper est id dui ullamcorper cursus. Maecenas fringilla ultricies turpis, nec pulvinar libero faucibus a. Quisque bibendum aliquam sapien, in fermentum arcu iaculis at. 
Mauris bibendum, metus sed rhoncus fringilla, nisl purus interdum eros, vitae malesuada felis est rhoncus magna. Phasellus elit justo, sagittis nec interdum tincidunt, mollis quis justo. Suspendisse rhoncus rutrum vestibulum. Aliquam ut nunc lectus, quis aliquam risus. Aliquam vel nulla sed odio blandit sagittis. Nulla facilisi. Vivamus ullamcorper, lectus facilisis eleifend accumsan, purus massa sollicitudin nunc, in sodales tellus dui eget est. Morbi ipsum nisi, semper sit amet vehicula sit amet, semper at mauris. Nam mollis massa sed risus scelerisque quis congue mauris tempus. Vestibulum nec urna magna, vitae ornare massa. Aenean adipiscing tempor rutrum.</p> <p>In hac habitasse platea dictumst. Etiam in dolor eros, eleifend volutpat magna. Sed blandit gravida feugiat. Sed eu dolor in odio sagittis molestie eget ac orci. Phasellus tellus erat, scelerisque tincidunt lacinia sed, placerat eu sapien. Curabitur lobortis feugiat cursus. Nam eu egestas justo. Nullam dignissim enim ipsum, sed semper orci. Donec nulla dui, viverra vel viverra eu, eleifend nec justo. Sed in ultricies turpis. Maecenas ullamcorper, erat ac scelerisque mattis, augue magna laoreet mauris, nec sagittis tellus enim eget tellus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. In vestibulum urna eu magna ultricies adipiscing. Phasellus sed urna at nibh euismod vestibulum at eget dui. Nulla ullamcorper viverra tellus ut volutpat. Praesent hendrerit, purus a imperdiet tempus, turpis est suscipit felis, ut commodo diam orci ac augue. Quisque consectetur varius sapien, vel lobortis ante porttitor sit amet. Proin fermentum blandit justo, id faucibus elit feugiat ut. Nulla quam elit, tristique gravida ultrices in, imperdiet et enim.</p> <p>Aliquam malesuada, nibh eget laoreet malesuada, lorem ligula gravida eros, a consectetur dui odio id urna. Vivamus tincidunt porttitor facilisis. Maecenas vitae lacus at lorem porttitor sodales. Duis et velit ac ipsum cursus ornare. Aliquam eu rhoncus est. Cras nec facilisis tellus. Nunc in felis odio. Nam facilisis dui eu lacus egestas sit amet malesuada dolor volutpat. In placerat dictum turpis ac vulputate. Suspendisse neque odio, elementum sagittis sollicitudin quis, eleifend ac orci. Proin suscipit molestie orci non venenatis. Sed metus mauris, laoreet id lobortis at, tempor eu erat. Mauris tempor, nisi id interdum tempor, tellus ligula pretium mi, a viverra nibh neque vitae est. Integer mattis, lorem ac congue fermentum, quam ipsum gravida erat, in egestas lorem eros ac massa. Vestibulum lobortis ante libero, vel fermentum ante. Aliquam augue ipsum, ullamcorper sit amet dictum id, commodo sit amet lacus. Vivamus elit purus, elementum a vestibulum quis, iaculis id metus. Cras facilisis orci in nulla consequat gravida. Integer blandit, felis at lacinia porta, lacus velit pretium magna, ut eleifend diam magna a justo. Donec scelerisque diam quis nisi molestie vel egestas urna condimentum. </p> <script type="text/javascript"> Effect.Shake('floating_text'); </script> </body> </html>

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37 38  | Next Page >