Search Results

Search found 950 results on 38 pages for 'floating'.

Page 25/38 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >

  • Live programming help

    - by frazras
    This idea has been floating around my head for a few years. I started some work on it, but I just want to know if it is feasible, sensible, or if there is something else like it out there - I don't want to find out I was wasting time on a solved issue. Whenever I have a programming issue, this is my sequence: Google it!: That usually brings up a lot of things: blogs, forums, stackoverflow, stackexchange, and even the official docs of the language/framework/CMS. Ask on IRC: I format my question and try to get people on IRC to help me. Make a post: I create a post on forums/stackoverflow/stackexchange or shout on twitter with hashtags. Now, a lot of the time I am in the middle of a project with a deadline, so I want answers NOW!!! Sometimes just 5-15 minutes worth of attention. Usually by the time I am failing at getting answers at #2, I am imagining how many people are ONLINE NOW with the skill and my exact answer, but they're playing video games, watching YouTube or idling online. However, if they were motivated, they would invest the 15 minutes helping me, and that would make a world of difference. I am even in positions where I would PAY for that 15 minutes of instant help. If your rate is as much as $100/hour (a relatively good programmer), that is $25 that might save me 3 hours. This help would be live: text chat/Skype/phone/screenshare. Should I continue developing this idea, or is there a better alternative out there? Or is this simply an unfeasible idea?

    Read the article

  • Windows 8 App Downloads Increasing + Over 5,000 Apps Available

    - by David Paquette
    Windows 8 will be unleashed on the general public tomorrow, and I thought it would be a good time to review some of the numbers I have been tracking over the last month. Downloads of Windows 8 apps have been steadily increasing over the last month.  Below is a screenshot from the App Summary page for my Windows 8 app.  The blue line is my app, while the orange line is the average for the top 5 apps in that subcategory.  Considering the large gap between the two, I think it is safe to assume that my app is NOT in the top 5 in the subcategory. The spike in the last couple of days is fairly dramatic, and I am a little surprised by that.  I would have expected that kind of spike on the days following the official release as opposed to the days leading up to the release.   Finally, the all-important app count.  There have been some stories floating around that the Windows 8 Store is a ghost town and that there are no apps available.  I think these might be exaggerating the situation a little.  As of this morning, there are over 5,000 apps available for download in the US store.  Obviously a far cry from the hundreds of thousands available in other app stores, but we are seeing solid growth in this number. Less than a month ago, that number was 2,000. That means the store more than doubled in less than a month. If the growth continues, it won't be long before the Windows 8 Store is filled with all the apps you need (and a whole lot you don't need).

    Read the article

  • How to stay productive? What time management software is available?

    - by andrewsomething
    So since I started using askubuntu.com I've spent entirely too much time here answering other people's questions. Now maybe someone could help me with that by answering this one. I'm looking for time management software for Ubuntu. There are a number of these programs floating around for Windows; RescueTime is one that is very popular. The key features that I'd like to see in a Linux app that RescueTime has are:
    - Automatically records what application you are using, including what websites you visit.
    - Reports and graphs on your time usage.
    - Notifications for when you have spent too much time on "distractions."
    While RescueTime doesn't officially support Linux, there is an open source RescueTime Linux Uploader. Unfortunately, it seems to only support Firefox and Epiphany for website tracking. I'm a Chromium user. The other major drawback to RescueTime is that it is a web service. I'd much rather not upload detailed information about how I spend my time to some third party. Google already knows too much about me as it is. Project Hamster, a GNOME time management app, comes so close. Sadly, it does not automatically track what you are doing. If I had enough discipline to manually report to an applet what I was up to, I doubt I'd need this. (How cool would it be if they provided some Zeitgeist integration to handle that part?)

    Read the article

  • Adding tolerance to a point in polygon test

    - by David Gouveia
    I've been using this method, which was taken from Game Coding Complete, to detect whether a point is inside of a polygon. It works in almost every case, but is failing on a few edge cases, and I can't figure out the reason. For example, given a polygon with vertices at (0,0), (0,100) and (100,100), the algorithm is returning:
    - True for any point strictly inside the polygon
    - False for any of the vertices
    - False for (0, 50), which lies on one of the edges of the polygon
    - True (?) for (50,50), which is also on one of the edges of the polygon
    I'd actually like to relax the algorithm so that it returns true in all of these cases. In other words, it should return true for points that are strictly inside, for the vertices themselves, and for points on the edges of the polygon. If possible, I'd also like to give it enough tolerance so that it always tends towards "true" in the face of floating point fluctuations. For example, I have another method that, given a line segment and a point, returns the closest location on the line segment to the given point. Currently, given any point outside the polygon and one of its edges, there are cases where the result is categorized as being inside by the method above, while other points are considered outside. I'd like to give it enough tolerance so that it always returns true in this situation. The way I've currently solved the problem is a hack, which consists of using an external library to inflate the polygon by a few pixels and performing the tests on the inflated polygon, but I'd really like to replace this with a proper solution.
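
    One way to get the relaxed behaviour is to treat "within some epsilon of an edge or vertex" as inside, and only fall back to the usual crossing test for everything else. A minimal sketch of that idea (Python for illustration, not the Game Coding Complete routine; the polygon is assumed to be a list of (x, y) tuples and eps is whatever tolerance suits your coordinate scale):

    def dist_point_to_segment(px, py, ax, ay, bx, by):
        # distance from P to the closest point on segment AB (clamped to the segment)
        abx, aby = bx - ax, by - ay
        ab_len2 = abx * abx + aby * aby
        t = 0.0 if ab_len2 == 0 else max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab_len2))
        cx, cy = ax + t * abx, ay + t * aby
        return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

    def point_in_polygon_tolerant(p, poly, eps=1e-6):
        px, py = p
        inside = False
        for i in range(len(poly)):
            ax, ay = poly[i]
            bx, by = poly[(i + 1) % len(poly)]
            # edges and vertices count as inside, within eps
            if dist_point_to_segment(px, py, ax, ay, bx, by) <= eps:
                return True
            # standard even-odd ray-crossing test for the strict interior
            if (ay > py) != (by > py):
                x_cross = ax + (py - ay) * (bx - ax) / (by - ay)
                if px < x_cross:
                    inside = not inside
        return inside

    With this, the vertices, (0, 50) and (50, 50) from the example all return True, and any point that the closest-point-on-segment method places within eps of an edge is treated as inside as well.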

    Read the article

  • Smooth waypoint traversing

    - by TheBroodian
    There are a dozen ways I could word this question, but to keep my thoughts in line, I'm phrasing it in line with my problem at hand. I'm creating a floating platform that I would like to simply travel from one designated point to another, then return back to the first, and just pass between the two in a straight line. However, just to make it a little more interesting, I want to add a few rules to the platform:
    - It travels in multiples of whole tile values of the world data, so if the platform is not stationary, it will travel at least one whole tile width or tile height.
    - Within one tile length, I would like it to accelerate from a stop to a given max speed.
    - Upon reaching one tile length's distance, I would like it to slow to a stop at the given tile coordinate and then repeat the process in reverse.
    The first two parts aren't too difficult; essentially I'm having trouble with the third part. I would like the platform to stop exactly at a tile coordinate, but since I'm working with acceleration, it would seem easy to simply begin applying acceleration in the opposite direction to a value storing the platform's current speed once it comes within one tile's length of its stopping point (assuming the platform is traveling more than one tile length in total, but to keep things simple, let's just assume it is) - but then the question is what the correct acceleration value would be to produce this effect. How would I find that value?
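
    For the third part, the standard constant-acceleration relation v^2 = 2*a*d answers the question directly: to go from a top speed v to rest over exactly one tile length d, decelerate at a = v^2 / (2*d), and start braking once the remaining distance drops to v^2 / (2*a). A minimal sketch of that profile (Python for illustration; TILE and MAX_SPEED are made-up values, and in practice you would snap the position to the target tile coordinate once the speed reaches zero, since floating point drift will leave it slightly off):

    TILE = 32.0        # tile length in world units (assumed)
    MAX_SPEED = 96.0   # world units per second (assumed)
    ACCEL = MAX_SPEED ** 2 / (2.0 * TILE)   # accelerate at +ACCEL, brake at -ACCEL

    def step(position, speed, direction, dist_to_stop, dt):
        # brake once the remaining distance is just enough to reach zero at the target
        braking = dist_to_stop <= speed ** 2 / (2.0 * ACCEL)
        a = -ACCEL if braking else ACCEL
        speed = max(0.0, min(MAX_SPEED, speed + a * dt))
        position += direction * speed * dt
        return position, speed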

    Read the article

  • Functional programming compared to OOP with classes

    - by luckysmack
    I have been interested in some of the concepts of functional programming lately. I have used OOP for some time now. I can see how I would build a fairly complex app in OOP: each object knows how to do the things that object does, or anything its parent class does as well. So I can simply tell Person().speak() to make the person talk. But how do I do similar things in functional programming? I see how functions are first-class items, but a function only does one specific thing. Would I simply have a say() function floating around and call it with an equivalent of a Person() argument so I know what kind of thing is saying something? So I can see the simple things; just how would I do the equivalent of OOP objects in functional programming, so I can modularize and organize my code base? For reference, my primary experience with OOP is Python, PHP, and some C#. The languages that I am looking at that have functional features are Scala and Haskell, though I am leaning towards Scala. Basic example (Python):

    class Animal(object):
        def say(self, what):
            print(what)

    class Dog(Animal):
        def say(self, what):
            super().say('dog barks: {0}'.format(what))

    class Cat(Animal):
        def say(self, what):
            super().say('cat meows: {0}'.format(what))

    dog = Dog()
    cat = Cat()
    dog.say('ruff')
    cat.say('purr')
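
    For comparison, here is a rough functional-style counterpart of that example, sketched in Python rather than Scala or Haskell (the names are illustrative): the animal is plain data, and the behaviour lives in a free function that works on that data instead of in a method on a class hierarchy.

    from collections import namedtuple

    # the "object" is just immutable data
    Animal = namedtuple('Animal', ['kind', 'verb'])

    def say(animal, what):
        # behaviour is a free function over the data
        print('{0} {1}s: {2}'.format(animal.kind, animal.verb, what))

    dog = Animal('dog', 'bark')
    cat = Animal('cat', 'meow')
    say(dog, 'ruff')   # dog barks: ruff
    say(cat, 'purr')   # cat meows: purr

    In Scala or Haskell the same shape usually becomes an algebraic data type (or a trait / type class) with functions pattern-matching on it, which is how code gets modularised without a Person().speak()-style method call.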

    Read the article

  • How to prevent window list "confusion" when detaching eclipse views?

    - by amotzg
    I'm detaching Eclipse views to float on my other screen in order to get more coding space on the first screen. When doing that, the detached windows appear in Ubuntu's window list applet with the Eclipse icon but with no title. Then, when pushing the main Eclipse button on the window list, one of the detached views is brought to the front, but not the main Eclipse window. When using Alt+Tab I can also see the extra Eclipse icons, but choosing the correct one for the main window works and makes it the active window, while also showing all the detached children. Other applications behave as expected, e.g. GIMP's floating panels don't show in the window list (and this is also the case with SlickEdit), Firefox child windows all show in the window list but get the focus correctly, etc. I can see that the workspace switcher shows my two screens, but in 'Monitor preferences' I see my two screens as one big screen. I'm working with Ubuntu 10.04.4 under a VMware Workstation 7.1.3 build-324285. 'uname -a' output: Linux ubuntu 2.6.32-40-generic #87-Ubuntu SMP Tue Mar 6 00:56:56 UTC 2012 x86_64 GNU/Linux. The desktop screenshot shows the problem, Ubuntu's version, and Monitor preferences. How can I solve it and make only the main window show in the window list, or at least have the main window activated when pushing its button on the window list?

    Read the article

  • Resilient Linux Mail Server Setup

    - by Coops
    How would people design a resilient mail server setup with Linux? On an application level, the system needs to provide both an incoming and outgoing mail service (i.e. SMTP & IMAP), along with filtering and archive storage (the archive part isn't critical yet, so we'll probably look at this later). What is required on top of this is a resilient system, i.e. one which will handle individual server failures without interrupting service. As such I would term this a High Availability mail system. This is in contrast to a High Performance mail setup, as in our case the volume of mail being handled isn't the important factor; it's simply that it stays online. Having not approached this problem before, the first thing I thought of was a clustered file system (GFS/Gluster/etc.), combined with Heartbeat to fail over a floating IP to another box in the case of a server failure. Combined with Postfix & Dovecot, does this sound feasible to people?

    Read the article

  • Excel - add target line to stacked bar chart

    - by Chris W
    I've got a stacked bar chart. I'm displaying a set of floating bars to represent hi/low ranges for some metrics; by using a transparent fill on the bottom section of each bar I achieve the desired look. What I now need to do is add a horizontal line across the chart to indicate how a particular user's score relates to all of these hi/low ranges, therefore the placement of this line needs to be dynamic, based on a value in a cell. Is there any way to do this? I can't find an easy option. If this were a simple bar chart I could add the target scores as a new series and use the line chart type, but I don't seem able to overlay a second series on the stacked bar chart. I'm using 2003 at the moment but can run this in 2007 if that helps.

    Read the article

  • Companies and Ships

    - by TechnicalWriting
    I have worked for small, medium, large, and extra large companies, and they have something in common with ships. These metaphors have been used before, I know, but I will have a go at them.
    The small company is like a speed boat: exciting and fast, and it can turn on a dime, literally. Captain and crew share a lot of the work. A speed boat has a short range and needs to refuel a lot. It has difficulty getting through bad weather. (Small companies often live quarter to quarter. By the way, if a larger company is living quarter to quarter, it is taking on water.)
    The medium company is like a battleship. It can maneuver, has a longer range, and the crew is focused on its mission. Its main concerns are the other battleships trying to blow it out of the water, but it can respond quickly. Bad weather can jostle it, but it can get through most storms.
    The large company is like an aircraft carrier; a floating city. It is well-provisioned and can carry a specialized load for a very long range. Because of its size and complexity, it has to be well-organized to be effective, and most of its functions are specialized (with little to no functional cross-over). There are many divisions and layers between Captain and crew. It is not very maneuverable; it has to set its course well in advance and have a plan of action.
    The extra large company is like a cruise liner. It also has to be well-organized, and changes in direction are often slow. Some of the people are hard at work behind the scenes to run the ship; others can be along for the ride. They sail the same routes over and over again (often happily), with the occasional cosmetic face-lift to the ship and entertainment. It should stay in warm, friendly waters and avoid risky speed through fields of icebergs.
    I have enjoyed my career on the various Ships of Technical Writing, but I get most of my juice from the battleship, where I am closer to the campaign and my contributions have the greater impact on success.
    Mark Metcalfe, www.linkedin.com/in/MarkMetcalfe

    Read the article

  • How do I separate model positions from view positions in MVC?

    - by tieTYT
    Using MVC in games (as opposed to web apps) always confuses me when it comes to the view. How am I supposed to keep the model agnostic of how the view is presenting things? I always end up giving the model a position that holds x and y, but invariably these values end up being in units of pixels, and that feels wrong. I can see the advantage* of avoiding that, but how am I supposed to? This idea was suggested: don't think of them as units of pixels; think of them as arbitrary distance units that just happen to map to pixels at a 1:1 ratio. Oh, the resolution is half of what it was? We now take the x/y coordinates at 50% value for screen display, and your spell's casting range is still 300 units long, which is now 150 pixels. But those numbers conveniently work out. What do I do if the numbers divide in such a way that I get decimal places? Floating points are unsafe. I think allowing decimal places would eventually cause really weird bugs in my game. *It'd let me write the model once and write different views depending on the device.
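
    One common arrangement is to keep the model in world units and confine the pixel scale to a single conversion in the view. A minimal sketch of that separation (Python for illustration; the names Entity, Renderer and pixels_per_unit are made up, not from the question):

    class Entity:
        def __init__(self, x, y):
            self.x, self.y = x, y            # world units; the model never sees pixels

    class Renderer:
        def __init__(self, pixels_per_unit):
            self.ppu = pixels_per_unit        # the only place the pixel scale lives

        def to_screen(self, entity):
            return (round(entity.x * self.ppu), round(entity.y * self.ppu))

    player = Entity(12.5, 3.0)
    print(Renderer(32).to_screen(player))     # (400, 96)  at 32 px per unit
    print(Renderer(16).to_screen(player))     # (200, 48)  at half the resolution

    Rounding only happens at draw time, so fractional world coordinates never accumulate error in the model, and spell ranges and the like stay in world units regardless of resolution.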

    Read the article

  • 3 column layout with css display table, with first row having multiple rows?

    - by Damainman
    I am working on a new website which:
    - Has 3 columns, each column being a cell.
    - First column has 3 rows (logo, nav, icons) - it has a div with display: table which wraps around 3 divs with display: table-row.
    - The other two columns only have 1 row, with the middle column being the content area.
    However, since this is my first time using display: table, I am running into some things that aren't so clear to me. I was trying to avoid floating divs. If I need multiple rows with one cell in each row per column, do I embed each cell in a row, or just create each row and not declare cells? I understand that browsers automatically create the missing elements, but I want to make sure I do this properly to avoid any side effects that might occur from the browser creating them. Edit: I think my brain is just overworked. I guess I can accomplish this by just using 3 divs in the first column instead of a nested table div with the rows. This just popped into my head.

    Read the article

  • What is the best way to store meshes or 3d models in a class

    - by Robse
    I am wondering how I should store my mesh in memory after loading it from whatever file. I have questions floating in my head: Should a mesh be able to have sub-meshes, or does the 3D model just store a list of meshes all on the same level? Is there one material assigned to one mesh, 1:1? What do I have to consider if I want to store skeletal animations? By the way, it's an OpenGL ES 2 iOS game using GLKit. I came up with some basic struct types (but I think they are way too simple, and I need to add padding or change the vector3 to vector4):

    typedef union _N3DShortVector2 {
        struct { short x, y; };
        struct { short s, t; };
        short v[2];
    } N3DShortVector2;

    typedef union _N3DShortVector3 {
        struct { short x, y, z; };
        struct { short r, g, b; };
        struct { short s, t, p; };
        short v[3];
    } N3DShortVector3;

    typedef GLKVector3 N3DFloatVector3;

    typedef struct _N3DMeshRecordSV3 {
        N3DShortVector3 v1, v2, v3;
    } N3DMeshRecordSV3;

    typedef struct _N3DMeshRecordSV3FN3ST2 {
        N3DShortVector3 v1, v2, v3;
        N3DFloatVector3 n1, n2, n3;
        N3DShortVector2 t1, t2, t3;
    } N3DMeshRecordSV3FN3ST2;

    Read the article

  • Cheap ways to do scaling ops in shader?

    - by Nick Wiggill
    I've got an extensive world terrain that uses vec3 for the vertex position attribute. That's good, because the terrain has endless gradations due to the use of floating point. But I'm thinking about how to reduce the amount of data uploaded to the GPU. For my terrain, which uses discrete / grid-based vertex positions in x and z, it's pretty clear that I can replace my vec3s (floats, really) with shorts, halving the per-vertex position attribute cost from 12 bytes each to 6 bytes. Considering I've got little enough other vertex data, and an enormous amount of terrain data to push into the world, it's a major gain. Currently in my code, one unit in GLSL shaders is equal to 1 m in the world. I like that scale. If I move over to using shorts, though, I won't be able to use the same scale, as I would then have a very blocky world where every step in height is an entire metre. So I see these potential solutions to scale the positional data correctly once it arrives at the vertex shader stage:
    - Use 10:1 scaling, i.e. 1 short unit = 1 decimetre in CPU-side code, and do a division by 10 in the vertex shader to scale incoming decimetre values back to metres. Arbitrary (non-PoT) divisions tend to be slow, however.
    - Use (some-power-of-two):1 scaling (e.g. 8:1), which enables the use of a bitshift (e.g. val >> 3) to do the division... not sure how performant this is in shaders, though. Not as intuitive to read values, but possibly quite a bit faster than dividing by a non-PoT value.
    - Use a texture as a lookup table. I've heard that this is really fast.
    Or whatever solutions others can offer to achieve the same result - minimal vertex data with sensible scaling.
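
    On the power-of-two option: in a shader the decode does not need an integer divide or a bitshift at all - multiplying by the reciprocal constant (0.125 for 8:1) gives the same result and is a single float multiply. A small CPU-side illustration of the packing round trip (Python just for illustration; SCALE = 8 is an assumed choice, not from the question):

    SCALE = 8                 # quantized steps per metre (power of two, assumed)
    INV_SCALE = 1.0 / SCALE   # 0.125 - what the vertex shader would multiply by

    def encode(height_m):
        q = int(round(height_m * SCALE))
        assert -32768 <= q <= 32767, "value out of short range"
        return q

    def decode(q):
        # shader equivalent: position * INV_SCALE (one multiply, no divide or shift)
        return q * INV_SCALE

    for h in (0.0, 1.0, 2.375, 101.625):
        print(h, encode(h), decode(encode(h)))   # exact round trips at 1/8 m steps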

    Read the article

  • How can I fix a date that changes by 4 years and 1 day when pasted between Excel workbooks

    - by lcbrevard
    In Excel, dates are represented internally by a floating point number where the integer part is the number of days since "some date" and the fractional part is how far into that day (hence the time). You can see this if you change the format of a date - like 4/10/2009 - to a number: 39905. But when pasting a date between two different workbooks, the date shifts by 4 years and one day!!! In other words, "some date" is different between the two workbooks: in one workbook the number 0.0 represents 1/0/1900 and in the other 0.0 represents 1/1/1904. Where is this set, and is it controllable? Or does this represent a corrupted file? These workbooks were originally from Excel 2000 but have since been worked on in Excel 2007 and Excel 2003. I can demonstrate the problem between the two workbook files in both 2003 and 2010. The exact history of when they were created or what versions of Excel have been used on each is unknown.
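
    If memory serves, this is the per-workbook "1904 date system" option (Excel 2007: Excel Options > Advanced, under the calculation options for the workbook; Excel 2003: Tools > Options > Calculation), not a corrupted file, and the two epochs differ by the well-known 1,462-day offset - four years plus a day, since the 1900 system also counts the phantom 29 Feb 1900. A minimal sketch of the usual serial-number adjustment, assuming that setting is indeed the cause:

    OFFSET_DAYS = 1462   # difference between the 1900 and 1904 date systems

    def serial_1904_to_1900(serial):
        # a serial copied out of a 1904-system workbook needs 1462 added
        # for a 1900-system workbook to show the same calendar date
        return serial + OFFSET_DAYS

    def serial_1900_to_1904(serial):
        return serial - OFFSET_DAYS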

    Read the article

  • Locale number formatting issue windows 2008

    - by kris
    On a project we have multiple servers running Windows 2008. The servers are using the Russian locale. We have several programs that use floating point numbers, but the fractional part of the number on SOME servers is getting truncated. Through the regional settings, each machine has:
    - Locale: Russian
    - Current Location: United States
    - Decimal Symbol: . (period)
    I've tried distributing the changes through "Copy Settings", and even though the procedure works it seems like the settings aren't actually being propagated. So next I went into the registry. There is a key called "sDecimal", and in all cases on all servers the value of the key is '.'. There is no difference that I can find between the servers that DO have correct decimal formatting and those that DO NOT. Any advice on where I can look for a problem like this?

    Read the article

  • What do ptLineDist and relativeCCW do?

    - by Fasih Khatib
    I saw these methods in the Line2D Java docs but did not understand what they do. The Javadoc for ptLineDist says: Returns the distance from a point to this line. The distance measured is the distance between the specified point and the closest point on the infinitely-extended line defined by this Line2D. If the specified point intersects the line, this method returns 0.0. The doc for relativeCCW says: Returns an indicator of where the specified point (PX, PY) lies with respect to the line segment from (X1, Y1) to (X2, Y2). The return value can be either 1, -1, or 0 and indicates in which direction the specified line must pivot around its first endpoint, (X1, Y1), in order to point at the specified point (PX, PY). A return value of 1 indicates that the line segment must turn in the direction that takes the positive X axis towards the negative Y axis. In the default coordinate system used by Java 2D, this direction is counterclockwise. A return value of -1 indicates that the line segment must turn in the direction that takes the positive X axis towards the positive Y axis. In the default coordinate system, this direction is clockwise. A return value of 0 indicates that the point lies exactly on the line segment. Note that an indicator value of 0 is rare and not useful for determining colinearity because of floating point rounding issues. If the point is colinear with the line segment, but not between the endpoints, then the value will be -1 if the point lies "beyond (X1, Y1)" or 1 if the point lies "beyond (X2, Y2)".
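
    For intuition, both methods reduce to the 2D cross product of (B - A) with (P - A): its magnitude divided by |B - A| is the perpendicular distance, and its sign says which side of the line P falls on. A rough sketch of that geometry (Python; simplified, not the JDK source - it omits relativeCCW's colinear "beyond the endpoints" handling, and which sign ends up meaning "counterclockwise" depends on Java 2D's y-down coordinate system):

    import math

    def pt_line_dist(x1, y1, x2, y2, px, py):
        # distance from P to the infinite line through (x1,y1)-(x2,y2):
        # |cross(B - A, P - A)| / |B - A|
        dx, dy = x2 - x1, y2 - y1
        return abs(dx * (py - y1) - dy * (px - x1)) / math.hypot(dx, dy)

    def side_of_line(x1, y1, x2, y2, px, py):
        # sign of the same cross product: +1 / -1 for the two sides of the line,
        # 0 only when the product is exactly zero - which is why the docs warn
        # it is rarely useful as a colinearity test with floating point
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        return (cross > 0) - (cross < 0)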

    Read the article

  • how does HDR work?

    - by dotminic
    I'm trying to understand what HDR is and how it works. I understand the basic concepts and have a slight idea of how it is implemented with D3D/HLSL, however it's still pretty foggy. Say I'm rendering a sphere with a texture of the earth and a small point list of vertices to act as stars - how would I render this in HDR? Here are a few things I'm confused about: I'm guessing I can't use just any basic image format for the texture, as the values would be limited to [0, 255] and clamped to [0, 1] in a shader. Same goes for the back buffer; I take it the format needs to be a floating-point format? What are the other steps involved? Surely there has to be more than just using floating-point formats to render to a render target and then applying some bloom as a post process? (Considering the output will be 8 bpp anyway.) Basically, what are the steps for HDR? How does it work? I can't seem to find any good papers / articles that describe the process, other than this one, but it seems to skim over the basics a little, so it's confusing.
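
    For what it's worth, the usual outline is: render the scene into a floating-point render target (e.g. a 16-bit-per-channel float format) so values above 1.0 survive, optionally measure the average luminance to pick an exposure, run a bright-pass plus blur for bloom, then tone map and gamma-encode down to the 8 bpp back buffer. A minimal sketch of just the tone-mapping step (Python/NumPy purely for illustration - in D3D this would be an HLSL full-screen pass, and Reinhard is only one of several operators):

    import numpy as np

    def tonemap_reinhard(hdr, exposure=1.0):
        c = hdr * exposure                 # exposure chosen by the app (e.g. from average luminance)
        ldr = c / (1.0 + c)                # Reinhard: maps [0, inf) into [0, 1)
        return (ldr ** (1.0 / 2.2) * 255).astype(np.uint8)   # gamma-encode to 8 bits

    hdr_pixels = np.array([[0.02, 0.5, 1.0, 4.0, 50.0]])     # unclamped, radiance-like values
    print(tonemap_reinhard(hdr_pixels))    # bright values compress instead of clipping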

    Read the article

  • How to fix Sketchup in Wine when tool starts, but displays empty workspace?

    - by Chaos_99
    I've installed Wine 1.6 and winetricks on a Linux Mint 15 system, then downloaded the latest SketchUp 2013 'Make' Windows installer and installed it through Wine. I prepared the Wine environment by starting with WINEARCH=win32, installed corefonts and ie8, and enabled the override for the 'riched20' libraries. (I've no idea what the last bit does, but it was advised in some guides.) I've also tried without these steps. Only the win32 part seems to make a difference, as the installer will complain about not finding SP2 otherwise. SketchUp is installed successfully and starts, but displays an empty viewport. The program is responsive and everything works; it's just that you can't see anything. I don't get any OpenGL error, and the registry entries seem fine according to the OpenGL issue workarounds floating around the net. I still think it has something to do with OpenGL not working properly, maybe not in the Wine environment, but in the Linux system? I'm running on a Lenovo W520 with Nvidia/Intel hybrid cards, but only the Nvidia card is active and the proprietary nvidia (319) drivers are installed. glxgears runs fine, but clamps at 2x the refresh rate. glxinfo outputs: direct rendering: Yes; server glx vendor string: NVIDIA Corporation; server glx version string: 1.4. I'm willing to try any Linux or Wine OpenGL tests to narrow down the problem, if you can offer any advice on what to use.

    Read the article

  • Unidentified Window OnStartup

    - by CMP
    Every time I start up Windows Vista lately, I see a random floating window. It is a tiny little window with no title, and only the resize, maximize and restore buttons. I'd post an image, but I don't have reputation here yet. I can close it, and it does indeed go away, but I would love to figure out what it is and stop it from popping up at all. I used AutoHotkey's Window Spy on it, and all I learned is that it is a Swing window, which doesn't help me out a whole lot. Is there a good way to identify which process it belongs to and figure out how to kill it?

    Read the article

  • How to clean up orphaned SID's in ACEs in AD?

    - by geoffc
    As a follow-up to my question Do backlinks clear in AD for deleted users, I have another related but different question. I am informed in the answers there that a deleted object's SID (group or user - so assigning rights only to groups merely minimizes the issue, it does not fix it) will remain in any ACEs it has been assigned, leaving those entries orphaned. Lotus Domino, which has similar issues with back references, has an adminp process to clean up such orphaned references. Is there a similar process in AD that would allow you to clean up such orphaned SIDs floating around your domain?

    Read the article

  • What kinds of languages would be most useful for this kind of webapp?

    - by Caedar
    I've had some experience with programming in the past (2-3 years of C++ self-teaching), so I'm no stranger to the programming process, but there are so many languages out there that I'm lost when thinking about this project idea that's been floating around my head: I would like to create a webapp that would be used for helping somebody figure out what kinds of productivity tools would suit them. The first part of the app would basically be a survey with a variety of questions that would help weed out tools that wouldn't be useful for them (a slider bar between minimalist and maximizer, a slider bar between all free apps and no cost limit, checkboxes on what platforms are required, etc.). While the person is filling out the survey, they will see a web of applications, webapps, and other tools forming on the screen, with links showing the relationships the programs have with each other (syncing supported, good combinations of apps, etc.), along with a list of applications below sorted by general use (notetaking, document organization, storage, etc.). I would imagine that each program entered into the database would have a certain set of characteristics, i.e. price, user friendliness, platforms supported, general uses, etc., and the survey would be designed to correlate to those elements and remove programs that don't match the criteria set. The difficult part of this entire process would be getting the web of applications to arrange itself and render properly. Now that I've finished mind-dumping, on to my question: what kinds/combinations of programming languages would you imagine being useful for this kind of project, and why? I learn best by setting up a project for myself like this one and tinkering with the languages, so I don't mind if the end product is out of reach of my current skill level. I'd just like some guidance so I don't fumble in the dark for too long.

    Read the article

  • Does the .NET Framework need to be reoptimized after upgrading to a new CPU microarchitecture?

    - by Louis
    I believe that the .NET Framework will optimize certain binaries targeting features specific to the machine it's installed on. After changing the CPU from an Intel Nehalem to a Haswell chip, should the optimization be run again manually? If so, what is the process for that? Between generations, here are some notable additions:
    - Westmere: AES instruction set
    - Sandy Bridge: Advanced Vector Extensions
    - Ivy Bridge: RdRand (hardware random number generator), F16C (16-bit floating-point conversion instructions)
    - Haswell: Haswell New Instructions (includes Advanced Vector Extensions 2 (AVX2), gather, BMI1, BMI2, ABM and FMA3 support)
    So my, albeit naive, thought process was that the optimizations could take advantage of these in general cases. For example, perhaps calls to the Random library could utilize the hardware-RNG on Ivy Bridge and later models.

    Read the article

  • VFP Unit Matrix Multiply problem on the iPhone

    - by Ian Copland
    Hi. I'm trying to write a Matrix3x3 multiply using the Vector Floating Point on the iPhone, however I'm encountering some problems. This is my first attempt at writing any ARM assembly, so it could be a fairly simple solution that I'm not seeing. I've currently got a small application running using a maths library that I've written. I'm investigating the benefits the Vector Floating Point unit would provide, so I've taken my matrix multiply and converted it to asm. Previously the application would run without a problem, however now my objects will all randomly disappear. This seems to be caused by the results from my matrix multiply becoming NaN at some point. Here's the code:

    IMatrix3x3 operator*(IMatrix3x3 & _A, IMatrix3x3 & _B)
    {
        IMatrix3x3 C;

        //C++ code for the simulator
    #if TARGET_IPHONE_SIMULATOR == true
        C.A0 = _A.A0 * _B.A0 + _A.A1 * _B.B0 + _A.A2 * _B.C0;
        C.A1 = _A.A0 * _B.A1 + _A.A1 * _B.B1 + _A.A2 * _B.C1;
        C.A2 = _A.A0 * _B.A2 + _A.A1 * _B.B2 + _A.A2 * _B.C2;
        C.B0 = _A.B0 * _B.A0 + _A.B1 * _B.B0 + _A.B2 * _B.C0;
        C.B1 = _A.B0 * _B.A1 + _A.B1 * _B.B1 + _A.B2 * _B.C1;
        C.B2 = _A.B0 * _B.A2 + _A.B1 * _B.B2 + _A.B2 * _B.C2;
        C.C0 = _A.C0 * _B.A0 + _A.C1 * _B.B0 + _A.C2 * _B.C0;
        C.C1 = _A.C0 * _B.A1 + _A.C1 * _B.B1 + _A.C2 * _B.C1;
        C.C2 = _A.C0 * _B.A2 + _A.C1 * _B.B2 + _A.C2 * _B.C2;
        //VPU ARM asm for the device
    #else
        //create a pointer to the Matrices
        IMatrix3x3 * pA = &_A;
        IMatrix3x3 * pB = &_B;
        IMatrix3x3 * pC = &C;

        //asm code
        asm volatile(
            //turn on a vector depth of 3
            "fmrx r0, fpscr            \n\t"
            "bic r0, r0, #0x00370000   \n\t"
            "orr r0, r0, #0x00020000   \n\t"
            "fmxr fpscr, r0            \n\t"
            //load matrix B into the vector bank
            "fldmias %1, {s8-s16}      \n\t"
            //load the first row of A into the scalar bank
            "fldmias %0!, {s0-s2}      \n\t"
            //calculate C.A0, C.A1 and C.A2
            "fmuls s17, s8, s0         \n\t"
            "fmacs s17, s11, s1        \n\t"
            "fmacs s17, s14, s2        \n\t"
            //save this into the output
            "fstmias %2!, {s17-s19}    \n\t"
            //load the second row of A into the scalar bank
            "fldmias %0!, {s0-s2}      \n\t"
            //calculate C.B0, C.B1 and C.B2
            "fmuls s17, s8, s0         \n\t"
            "fmacs s17, s11, s1        \n\t"
            "fmacs s17, s14, s2        \n\t"
            //save this into the output
            "fstmias %2!, {s17-s19}    \n\t"
            //load the third row of A into the scalar bank
            "fldmias %0!, {s0-s2}      \n\t"
            //calculate C.C0, C.C1 and C.C2
            "fmuls s17, s8, s0         \n\t"
            "fmacs s17, s11, s1        \n\t"
            "fmacs s17, s14, s2        \n\t"
            //save this into the output
            "fstmias %2!, {s17-s19}    \n\t"
            //set the vector depth back to 1
            "fmrx r0, fpscr            \n\t"
            "bic r0, r0, #0x00370000   \n\t"
            "orr r0, r0, #0x00000000   \n\t"
            "fmxr fpscr, r0            \n\t"
            //pass the inputs and set the clobber list
            : "+r"(pA), "+r"(pB), "+r" (pC)
            :
            : "cc", "memory", "s0", "s1", "s2", "s8", "s9", "s10", "s11", "s12", "s13", "s14", "s15", "s16", "s17", "s18", "s19"
        );
    #endif
        return C;
    }

    As far as I can see that makes sense. While debugging I've noticed that if I set _A = C after the asm and prior to the return, _A will not necessarily be equal to C, which has only increased my confusion. I had thought it was possibly due to the pointers I'm giving to the VFP unit being incremented by lines such as "fldmias %0!, {s0-s2}", however my understanding of asm is not good enough to properly understand the problem, nor to see an alternative approach to that line of code. Anyway, I was hoping someone with a greater understanding than me would be able to see a solution, and any help would be greatly appreciated, thank you :-)

    Read the article

  • Convert NSData to primitive variable with ieee-754 or twos-complement ?

    - by William GILLARD
    Hi everyone. I am a new programmer in Obj-C and Cocoa. I'm trying to write a framework which will be used to read binary files (Flexible Image Transport System, or FITS, binary files, usually used by astronomers). The binary data that I am interested in extracting can have various formats, and I get its properties by reading the header of the FITS file. Up to now, I have managed to create a class to store the content of the FITS file and to isolate the header into an NSString object and the binary data into an NSData object. I have also managed to write methods which allow me to extract the key values from the header that are essential to interpret the binary data. I am now trying to convert the NSData object into a primitive array (array of double, int, short, ...). But here I get stuck and would appreciate any help. According to the documentation I have about the FITS format, I have six possibilities to interpret the binary data, depending on the value of the BITPIX key:

    BITPIX value | Data represented
               8 | Char or unsigned binary int
              16 | 16-bit two's complement binary integer
              32 | 32-bit two's complement binary integer
              64 | 64-bit two's complement binary integer
             -32 | IEEE single precision floating-point
             -64 | IEEE double precision floating-point

    I have already written the piece of code, shown below, to try to convert the NSData into a primitive array.

    // self refers to my FITS class, which contains an NSString object
    // with the content of the header and an NSData object with the binary data.
    -(void*) GetArray
    {
        switch (BITPIX) {
            case 8:
                return [self GetArrayOfUInt];
                break;
            case 16:
                return [self GetArrayOfInt];
                break;
            case 32:
                return [self GetArrayOfLongInt];
                break;
            case 64:
                return [self GetArrayOfLongLong];
                break;
            case -32:
                return [self GetArrayOfFloat];
                break;
            case -64:
                return [self GetArrayOfDouble];
                break;
            default:
                return NULL;
        }
    }

    // Then I show you the method to convert the NSData into a primitive array.
    // I restrict my example to the case of 'double'. Code is similar for other methods,
    // just change double to 'unsigned int' (BITPIX 8), 'short' (BITPIX 16),
    // 'int' (BITPIX 32), 'long long' (BITPIX 64), 'float' (BITPIX -32).
    -(double*) GetArrayOfDouble
    {
        int Nelements = [self NPIXEL];  // Method to extract, from the header,
                                        // the number of elements in the array
        NSLog(@"TOTAL NUMBER OF ELEMENTS [%i]\n", Nelements);

        //CREATE THE ARRAY
        double (*array)[Nelements];

        // Get the total number of bits in the binary data
        int Nbit = abs(BITPIX) * GCOUNT * (PCOUNT + Nelements);  // GCOUNT and PCOUNT are defined
                                                                 // in the header
        NSLog(@"TOTAL NUMBER OF BIT [%i]\n", Nbit);

        int i = 0;
        //FILL THE ARRAY
        double Value;
        for(int bit = 0; bit < Nbit; bit += sizeof(double))
        {
            [Img getBytes:&Value range:NSMakeRange(bit, sizeof(double))];
            NSLog(@"[%i]:(%u)%.8G\n", i, bit, Value);
            (*array)[i] = Value;
            i++;
        }

        return (*array);
    }

    However, the values I print in the loop are very different from the expected values (compared using official FITS software). Therefore, I think that the Obj-C double does not use the IEEE-754 convention, and that the Obj-C int is not two's complement. I am really not familiar with these two conventions (IEEE 754 and two's complement) and would like to know how I can do this conversion in Obj-C. Many thanks in advance for any help or information.
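
    One detail worth checking, assuming the data follows the FITS convention (this is not stated in the question itself): FITS stores its binary data big-endian, while the iPhone is little-endian, so copying the raw 8 bytes straight into a double reads the IEEE-754 bits in the wrong order - the doubles themselves are standard IEEE-754. A small illustration of the effect (Python rather than Objective-C; on iOS the byte swap would typically be done with the NSSwap*/CFSwap* helpers before interpreting the bits as a double):

    import struct

    raw = bytes.fromhex('3ff0000000000000')   # big-endian IEEE-754 encoding of 1.0
    print(struct.unpack('>d', raw)[0])        # 1.0 - read big-endian, as FITS stores it
    print(struct.unpack('<d', raw)[0])        # ~3.04e-319 - a meaningless denormal if read little-endian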

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >