Search Results

Search found 2888 results on 116 pages for 'scale'.


  • Connection Pool Strategy: Good, Bad or Ugly?

    - by Drew
    I'm in charge of developing and maintaining a group of web applications that are centered around similar data. The architecture I decided on at the time was that each application would have its own database and web root. Each application maintains a connection pool to its own database and to a central database for shared data (logins, etc.).

    A co-worker has been positing that this strategy will not scale, because maintaining so many different connection pools is inherently costly, and that we should refactor the databases so that all of the applications use a single central database (with any modifications unique to one system reflected in that one schema), served by a single pool managed by Tomcat. He claims there is a lot of "metadata" going back and forth across the network to maintain a connection pool.

    My understanding is that, with proper tuning so that each pool holds only as many connections as its application needs (low-volume apps getting fewer connections, high-volume apps getting more), the number of pools matters far less than the total number of connections; more formally, that the difference in overhead between maintaining 3 pools of 10 connections and 1 pool of 30 connections is negligible.

    The reasoning behind the original one-app-one-database design was that the apps are likely to diverge, and each system should be able to modify its schema as needed. It also eliminated the possibility of one system's data bleeding through into other apps. Unfortunately there is no strong leadership in the company to make a hard decision. Although my co-worker backs up his worries only with vagueness, I want to make sure I understand the ramifications of multiple small databases/connection pools versus one large database/connection pool.
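    For what it's worth, per-application pool sizing is easy to express with a standalone pool such as Commons DBCP. A minimal sketch against the DBCP 1.x API (the driver, URLs, credentials, and sizes here are made up for illustration):

        import org.apache.commons.dbcp.BasicDataSource;

        public class Pools {
            // One pool per application database, sized to that app's traffic.
            static BasicDataSource appPool(String url, int maxActive) {
                BasicDataSource ds = new BasicDataSource();
                ds.setDriverClassName("com.mysql.jdbc.Driver"); // hypothetical driver
                ds.setUrl(url);
                ds.setUsername("app");        // hypothetical credentials
                ds.setPassword("secret");
                ds.setMaxActive(maxActive);   // cap this app's connections
                ds.setMaxIdle(maxActive / 2); // shrink the pool when idle
                return ds;
            }

            // e.g. a low-volume app gets 5 connections, a high-volume one 20:
            static final BasicDataSource REPORTS = appPool("jdbc:mysql://db/reports", 5);
            static final BasicDataSource ORDERS  = appPool("jdbc:mysql://db/orders", 20);
            static final BasicDataSource SHARED  = appPool("jdbc:mysql://db/shared", 10);
        }

    Note that the pool's own bookkeeping is in-process; what crosses the network is per connection (handshakes, keepalives, validation queries), which is essentially the 3x10 versus 1x30 comparison above.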

    Read the article

  • How I May Have Taken A Wrong Path in Programming

    - by Ygam
    I am in a major slump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed the following attitudes in my programming:

    - I tend to be a purist, scorning inelegant approaches to solving problems with code
    - I tend to look at everything at a large scale, planning everything before I start coding, in either simple flowcharts or complex UML charts
    - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development time
    - I am obsessed with good directory structures, file naming conventions, and class, method, and variable naming conventions
    - I always want to study something new, even, as I said, at the cost of missing deadlines
    - I tend to see software development as something to engineer, to architect: seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking
    - I combine OOP and procedural coding whenever I see fit
    - I want my code to execute fast (hence the elegant approaches and the refactoring)

    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they have been programming since our first year in college). By the other way around I mean: they fire up their editors and get the job done much faster, because they don't really look at how clean their code is or how elegant their algorithms are; they don't bother with OOP however big their projects are; they mostly take web APIs, piece them together, and voila! Working code! Clients are happy, they get paid fast, at the expense of really unmaintainable, hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common defense being that internet connections are much faster these days and hardware is more powerful). The excuse I often hear is that clients don't care how you write the code, but they do care how soon you deliver it. If it works, all is good.

    Now, might my "purist" approach have been the wrong way to start programming? Should I just dump these purist concepts and code the hell up, because, as I have seen, clients don't really care how beautifully coded it is?

    Read the article

  • OpenGL fast texture drawing with vertex buffer objects. Is this the way to do it?

    - by Matthew Mitchell
    Hello. I am making a 2D game with OpenGL, and I would like to speed up my texture drawing by using VBOs; currently I am using immediate mode. I generate my own coordinates when I rotate and scale a texture. I also support rounding the corners of a texture, using the polygon primitive to draw those.

    I was thinking: would it be fastest to build a VBO with vertices for the sides of the texture, with no offset included, so I can then use glViewport, glScale (or glTranslate? what is the difference, and which is most suitable here?) and glRotate to move the drawing position for my texture? Then I could use the same VBO, unchanged, to draw the texture each time, and only change it when I need to add coordinates for the rounded corners.

    Is that the best way to do this? What should I look out for while doing it? And is it really faster to use GL_TRIANGLES instead of GL_QUADS on modern graphics cards? Thank you for any answer.
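    A minimal sketch of the reusable-unit-quad idea, written here in Java against the LWJGL 2 bindings (the equivalent C calls are the same); texture coordinates and the rounded corners are omitted, and the fixed-function matrix calls do all the positioning:

        import java.nio.FloatBuffer;
        import org.lwjgl.BufferUtils;
        import org.lwjgl.opengl.GL11;
        import org.lwjgl.opengl.GL15;

        public class SpriteVbo {
            private final int vboId;

            public SpriteVbo() {
                // A unit quad with no offset baked in, uploaded once and reused.
                FloatBuffer quad = BufferUtils.createFloatBuffer(8);
                quad.put(new float[] { 0, 0,  1, 0,  1, 1,  0, 1 }).flip();
                vboId = GL15.glGenBuffers();
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
                GL15.glBufferData(GL15.GL_ARRAY_BUFFER, quad, GL15.GL_STATIC_DRAW);
            }

            public void draw(float x, float y, float w, float h, float angleDeg) {
                // The modelview matrix, not the buffer, positions each sprite.
                GL11.glPushMatrix();
                GL11.glTranslatef(x, y, 0);        // move to the sprite's position
                GL11.glRotatef(angleDeg, 0, 0, 1); // rotate around its origin
                GL11.glScalef(w, h, 1);            // stretch the unit quad
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
                GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
                GL11.glVertexPointer(2, GL11.GL_FLOAT, 0, 0L);
                GL11.glDrawArrays(GL11.GL_TRIANGLE_FAN, 0, 4);
                GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
                GL11.glPopMatrix();
            }
        }

    On the parenthetical in the question: glViewport selects the drawable region of the whole render target and is not meant for per-sprite placement; the glTranslate/glRotate/glScale trio is.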

    Read the article

  • Can I architect a web app so it can be deployed to either the cloud or a dedicated server / VPS?

    - by CAD bloke
    Is there an architecture versatile enough that it may be deployed either to a cloud server or to a dedicated (or VPS) server with minimal change? Obviously there would be config changes, but I'd rather leave the rest of the app consistent, keeping one maintainable codebase. The app would be ASP.NET and/or ASP.NET MVC. My dev environment is VS 2010. The cloud may, or may not, be Azure. Dedicated or VPS would be Win Server 2008. Probably. It is not a public-facing web site.

    The web app I have in mind would be a separate deployment for each client. Some clients are small-scale, and some will prefer the app to run on a local intranet rather than on the web. Other clients may prefer the cloud approach for a black-box solution. The app may run for a few hours or it may run indefinitely; it depends on the client and the project. Other than the deployment scenarios, the apps would be more or less identical.

    As you may see from the tags, I'm assuming a message-based architecture is probably the most versatile, but I'm also used to being wrong about this stuff. All suggestions and pointers are welcome, regarding both general architectures and specific solutions.

    Read the article

  • Python 3, urllib ... Reset Connection Possible?

    - by Rhys
    In the larger scheme of my program, the goal of the code below is to filter out all dynamic HTML in a web page's source code. Code snippet:

        import urllib.request
        import urllib.error

        try:
            deepreq3 = urllib.request.Request(deepurl3)
            deepreq3.add_header("User-Agent", "etc......")
            # urlopen() is given the Request object, not the bare URL,
            # so the added header is actually sent
            deepdata3 = urllib.request.urlopen(deepreq3).read().decode("utf8", "ignore")
        except urllib.error.URLError:
            pass  # error handling elided in the original snippet

    This code is looped 3 times in order to identify whether the target web page is dynamic (i.e. its source code changes between fetches) or not. If the page IS dynamic, the code above loops another 15 times and attempts to filter out the dynamic content.

    QUESTION: While this filtering method works 80% of the time, some pages will reload ALL 15 times and STILL contain dynamic code. HOWEVER, if I manually close down the Python shell and re-execute my program, the dynamic HTML that my refresh-page method could not shake off is no longer there; it has been replaced with new dynamic HTML that my refresh-page method cannot shake off either.

    So I need to know: what is going on here? How does re-running my program cause the dynamic content of a page to change? And is there any way, any 'reset connection' command, I can use to recreate this without manually restarting my app? Thanks for your response.

    Read the article

  • Using Spring, Hibernate and Scala, is there a better way to load test data than DbUnit?

    - by egervari
    Here are some things I really dislike about DbUnit:

    1) You cannot specify the exact ordering of the inserts, because DbUnit likes to group your inserts by table name rather than by the order you define them in the XML file. This is a problem when you have records depending on records in other tables, so you have to disable foreign key constraints during your tests... which actually sucks, because those foreign key constraints will fire in production while your tests won't be aware of them!

    2) They seem hellbent on forcing you to use an XML namespace to define your XML... and I honestly can't be bothered to do this. I like data.xml without any namespace. It works. But they are so hellbent on deprecating it.

    3) Creating different XML files on a per-test basis is hard, so it actually encourages creating one data set for your entire app. Unfortunately, that process gets bloated once the data grows in size and things get entangled. There has got to be a better way to split up your test data into chunks without having to copy/paste a lot of it across all of your tests.

    4) Keeping track of id references in a big XML file is just impossible. If you have 130 domain classes, it gets bewildering. This model simply does not scale.

    Is there something less bloated and better in the Spring/Hibernate space? DbUnit has worn out its welcome and I'm really looking for something better.
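    One common alternative is to drop the XML entirely and build fixtures in code, where insert order is exactly the order written and foreign keys never need disabling. A minimal sketch, assuming a Spring-managed JUnit test with an injected Hibernate SessionFactory (the entity and test names are hypothetical, and the same approach works from Scala):

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.junit.Before;
        import org.junit.Test;

        public class OrderRepositoryTest {
            SessionFactory sessionFactory; // assume injected by the Spring test runner

            @Before
            public void fixtures() {
                Session s = sessionFactory.getCurrentSession();
                // Inserts run in the order written, so FK constraints stay enabled.
                // Customer and Order stand in for your own mapped entities.
                Customer alice = new Customer("alice");
                s.save(alice);
                // No id bookkeeping: the object reference carries the foreign key.
                s.save(new Order(alice, "widget"));
                s.flush();
            }

            @Test
            public void findsOrdersByCustomer() {
                // ... exercise the repository against the fixtures above
            }
        }

    Per-test data then becomes ordinary method extraction and reuse, rather than copy/pasted XML.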

    Read the article

  • Strategies for Synchronizing Data Between a Rails App and iPhone App

    - by jessecurry
    I've written many iPhone applications that pull data from web services, and I've worked on synchronizing data between an iPhone app and a web application, but I've always felt there is probably a better way to handle the synchronization. I'd like to know what strategies you have used to synchronize data between your iPhone (read: mobile) apps and your Rails (read: web) applications.

    - Are there any strategies that scale particularly well?
    - How have you dealt with large amounts of data? (Do you use paged responses?)
    - How do you make sure that data is not overwritten?
    - Is there a reason to avoid Ruby on Rails? If so, can you suggest an alternative, and what is better about it?
    - What strategies have failed, and why do you believe they failed?

    I would like to keep all of the data modifications on the server, but the particular application I am about to start work on will need to operate while disconnected from the network. The user will be able to update data on the mobile device and through the web application. When the user's mobile device connects to the server, any local changes will be pushed to the server.
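    One shape that comes up a lot for the disconnected case is token-based delta sync with per-record versions, so concurrent edits surface as conflicts instead of silent overwrites. A rough sketch of the client-side loop (in Java purely for illustration; every name here is hypothetical, and a real API would page the change feed):

        import java.util.List;

        class SyncEngine {
            static class Record { String id; long version; String payload; }
            static class SaveResult { boolean conflict; Record serverCopy; }

            interface Server {
                List<Record> changesSince(long token); // server-assigned change token
                SaveResult save(Record r);             // rejects saves with stale versions
            }

            interface LocalStore {
                long lastToken();
                void setLastToken(long t);
                List<Record> pendingEdits();
                void apply(Record r);
                void markConflict(Record local, Record server);
            }

            void sync(Server server, LocalStore store) {
                // 1. Pull: apply everything the server changed since our last token.
                long token = store.lastToken();
                for (Record r : server.changesSince(token)) {
                    store.apply(r);
                    token = Math.max(token, r.version);
                }
                store.setLastToken(token);
                // 2. Push: send local edits; the server compares versions, so a
                //    record edited on both sides surfaces as a conflict to resolve.
                for (Record r : store.pendingEdits()) {
                    SaveResult result = server.save(r);
                    if (result.conflict) store.markConflict(r, result.serverCopy);
                }
            }
        }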

    Read the article

  • How to draw shadows that don't suck?

    - by mystify
    A CAShapeLayer uses a CGPathRef to draw its stuff. So I have a star path, and I want a smooth drop shadow with a radius of about 15 units. There is probably some nice functionality for this in newer iPhone OS versions, but I need to do it myself for the ancient 3.0 (which most people still use).

    I tried to do something REALLY nasty: I created a for-loop that sequentially creates about 15 of those paths, transform-scaling them step by step to become bigger, assigning each to a newly created CAShapeLayer and decreasing its alpha a little on every iteration. Not only is this scaling mathematically incorrect (it should happen relative to the outline!), the shadow is not rounded and looks really ugly. That's why nice soft shadows have a radius: the tips of a star shouldn't appear totally sharp after a shadow size of 15 units. They should be soft like cream. But in my ugly solution they're just as sharp as the star itself, since all I do is scale the star 15 times and decrease its alpha 15 times. Ugly.

    I wonder how the big guys do it? If you had an arbitrary path, and that path must throw a shadow, how would the algorithm work? Probably the path would have to be expanded something like 30 times, point by point, relative to the tangent of the outline, away from the filled part, and only by 0.5 units per step to get a nice blend. Before I re-invent the wheel, maybe someone has a handy example or link?
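    The expand-relative-to-the-outline step described above is exactly what stroking a path with a round join does, and layering strokes of growing width at low alpha approximates the blur. A crude sketch of that idea in Java2D, only to illustrate the algorithm (on the iPhone the analogue would be stroking the CGPath with growing line widths):

        import java.awt.AlphaComposite;
        import java.awt.BasicStroke;
        import java.awt.Color;
        import java.awt.Graphics2D;
        import java.awt.Shape;

        public class SoftShadow {
            // Stroke the outline with growing widths and accumulating low alpha.
            // A stroke expands perpendicular to the outline, so star tips come
            // out rounded instead of being sharp scaled copies of the shape.
            static void paintShadow(Graphics2D g, Shape path, float radius) {
                int steps = 30;
                g.setColor(Color.BLACK);
                for (int i = steps; i >= 1; i--) {
                    float width = 2f * radius * i / steps; // stroke is centered on the outline
                    BasicStroke stroke = new BasicStroke(width,
                            BasicStroke.CAP_ROUND, BasicStroke.JOIN_ROUND);
                    g.setComposite(AlphaComposite.getInstance(
                            AlphaComposite.SRC_OVER, 1f / steps));
                    g.fill(stroke.createStrokedShape(path)); // a band around the outline
                }
                g.setComposite(AlphaComposite.SrcOver); // restore for the caller
            }
        }

    A real blur falls off smoothly (Gaussian-ish), so this is only an approximation, but it keeps the tips rounded rather than sharp.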

    Read the article

  • How to understand existing projects

    - by John
    Hi. I am a trainee developer and have been writing .NET applications for about a year now. Most of my work has involved building new applications (mainly web apps) from scratch, and I have been given more or less full control over the software design. This has been a great experience; however, as a trainee developer I have minimal confidence about whether the approaches I have taken are the best ones. Ideally I would love to collaborate with more experienced developers (I find that's the best way I learn), but in the company I work for, developers tend to work in isolation (a great shame for me).

    Recently I decided that a good way to learn more about how experienced developers approach their designs might be to explore some open source projects, but I found myself a little overwhelmed by the projects I looked at. With my level of experience it was hard to understand the bodies of code I faced.

    My question is a slightly fuzzy one: how do developers approach the task of understanding a new medium-to-large-scale project? I found myself poring over lots of code and struggling to see the wood for the trees. At any one time I felt I could understand a small portion of the system but not see how it all fits together. Do others get this same feeling? If so, what approaches do you take to understanding the project? Do you have any other advice about how to learn design best practices? Any advice will be very much appreciated. Thank you.

    Read the article

  • Using Lucene to index private data, should I have a separate index for each user or a single index?

    - by Nathan Bayles
    I am developing an Azure-based website and I want to provide search capabilities using Lucene (structured JSON objects would be indexed and stored in Lucene; other content such as Word documents would be indexed in Lucene but stored in blob storage). My requirements:

    1) The search must be secure, such that one user can never see a document belonging to another user.
    2) Users can run ad-hoc searches as typed.
    3) I can query programmatically for predefined sets of data, such as "all notes for user X".

    I think I understand how to add properties to each document to achieve these three objectives (I am listing them here so that anyone kind enough to answer will have a better idea of what I am trying to do). My questions revolve around performance and security:

    - Can I improve document security by having a separate index for each user, or is including the user's ID as a parameter in each search sufficient?
    - Can I improve indexing speed and the total throughput of the system by having a separate index for each user? My thinking is that separate indexes would let me scale the system by having multiple index writers (perhaps even on different server instances) working at the same time, each on its own index.

    Any insight would be greatly appreciated. Regards, Nate
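    For the single-index route, the usual pattern is an untokenized owner field on every document plus a mandatory owner clause wrapped around every user query, so no ad-hoc search can escape its own documents. A sketch against the Lucene 3.x API of that era (field and variable names are made up):

        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.BooleanClause.Occur;
        import org.apache.lucene.search.BooleanQuery;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.TermQuery;

        public class PerUserSearch {
            // Index time: stamp every document with its owner.
            static void stampOwner(Document doc, String userId) {
                doc.add(new Field("ownerId", userId,
                        Field.Store.NO, Field.Index.NOT_ANALYZED));
            }

            // Query time: AND the user's query with a mandatory owner clause,
            // so parsed ad-hoc input can never widen the result set.
            static Query secure(Query userQuery, String userId) {
                BooleanQuery q = new BooleanQuery();
                q.add(userQuery, Occur.MUST);
                q.add(new TermQuery(new Term("ownerId", userId)), Occur.MUST);
                return q;
            }
        }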

    Read the article

  • Pre-Project Documentation

    - by DeanMc
    I have an issue that I feel many programmers can relate to. I have worked on many small-scale projects. After my initial paper brainstorm, I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces come last, as the library usually dictates what is needed in the UI.

    As my projects get bigger, I worry that so should my "spec" or design document. That concern, from my investigations, is echoed all across the internet in one fashion or another. Where UIs are concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code: it seems, from my extensive research, that there is no 1:1 mapping between a design document and the code.

    When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks, so that development runs in a fairly linear fashion. My tasks tend to look like so:

    - Implement Binary File Reader
    - Implement Binary File Writer
    - Create Object to encapsulate Data for expression to the caller

    Now, any programmer worth his salt is aware that between those three to-do items could lie a potential wall of code spanning multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively; by the time one mangles pseudo-code it is essentially code anyway, so the time investment is negated.

    So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class, or concept level? What works for you?

    Read the article

  • Best practices for (over)using Azure queues

    - by John
    Hi, I'm in the early phases of designing an Azure-based application. One of the things that attracts me to Azure is the scalability, given the variability of the demand I'm likely to expect. As such, I'm trying to keep things loosely coupled so I can add instances when I need to.

    The recommendations I've seen for architecting an application for Azure include keeping web role logic to a minimum and having processing done in worker roles, using queues to communicate and some sort of back-end store like SQL Azure or Azure Tables. This seems like a good idea to me, as I can scale up either or both parts of the application without any issue.

    However, I'm curious if there are any best practices (or if anyone has any experiences) for when it's best to just have the web role talk directly to the data store vs. sending data by the queue. I'm thinking of the case where I have a simple insert to do from the web role: while I could set this up as a message, send it on the queue, and have a worker role pick it up and do the insert, it seems like a lot of double-handling. However, I also appreciate that it may be the case that this is better in the long run, in case the web role gets overwhelmed or more complex logic ends up being required for the insert.

    I realise this might be a case where the answer is "it depends entirely on the situation, check your perf metrics", but if anyone has any thoughts I'd be very appreciative! Thanks, John

    Read the article

  • iPhone: How to use CGContextConcatCTM for saving a transformed image properly?

    - by Irene
    I am making an iPhone application that loads an image from the camera; then the user can select a second image from the library, move/scale/rotate that second image, and save the result. I use two UIImageViews in IB as placeholders and apply transformations while touching/pinching.

    The problem comes when I have to save both images together. I use a rect of the size of the first image and pass it to UIGraphicsBeginImageContext. Then I tried to use CGContextConcatCTM, but I can't understand how it works:

        CGRect rect = CGRectMake(0, 0, img1.size.width, img1.size.height); // img1 from camera
        UIGraphicsBeginImageContext(rect.size);          // start drawing
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextClearRect(ctx, rect);                   // clear the whole thing
        [img1 drawAtPoint:CGPointZero];                  // draw the background image at 0,0
        CGContextConcatCTM(ctx, img2.transform);         // apply the 2nd image's transform

    But what do I need to do next? What information is held in the img2.transform matrix? The documentation for CGContextConcatCTM doesn't help me much, unfortunately. Right now I'm trying to solve it by calculating the points and the angle using trigonometry (with the help of this answer), but since the transform is right there, there has to be an easier and more elegant way to do this, right?

    Read the article

  • Just 2 free months to learn or improve my skills

    - by microspino
    On the 30th of June I will leave my everyday job to start as a freelance developer, and I'd like to set apart a period of 2 months to improve my dev skills. At work I code in C#, and in my spare time I have enjoyed building Ruby on Rails web applications and creating some Arduino prototypes. I'm something more than a junior, but I don't feel like a real senior developer, because I have never had a big corporate project designed and built by me with the help of other juniors (although I don't think that is really a good definition of "senior", it helps describe my feelings).

    Using a scale from 0 (ignorant) to 10 (proficient like a "samurai"), the list below describes the skills I would like to improve with just 2 months. I've already bought some nice, up-to-date books on all of the subjects below. The order doesn't matter:

    - C = 1
    - C# & .NET = 6
    - Arduino & Processing = 2
    - Ruby = 5
    - Rails = 5
    - HTML/XHTML/CSS = 9
    - Javascript = 6
    - Objective-C/iPhone dev = 2
    - Python = 4
    - Django = 4
    - Design Patterns = 3
    - Algorithms = 3
    - Git = 5

    I haven't included SQL or databases in general, nor networking, because I spent 10 years working with them in the past and feel pretty solid for now. As an aside, I've picked up some interest in Redis, Node.js, and HTML5 from reading about them on the web.

    After the two months, since I have to pay my bills, I will go looking for new work, and if the learning and developing went really well, maybe I can also invest in something born during them. Can you give me some advice on what is best to improve, or to build a learning project around (something like a "summer of code" thing)? The whole point is to see my weaknesses and work on them.

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical, and they all use RAID. To date, I've therefore been backing them up by having a cron job upload tarballs containing the contents of /etc, MySQL dumps, and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes); long-term, the process isn't sustainable
    - each of the boxes currently produces a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving out stuff it would be nice to include, such as the contents of users' home directories. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way

    So, my question is: how should I be doing this properly? The requirements are:

    - it needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - it should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - it should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup is what tar was created for: create a tar file once each month, add to it incrementally, and rsync the results to the remote box. But others probably have better suggestions.

    Read the article

  • Coordinate system problem with the grid control

    - by Jason94
    In my WPF application I'm trying to visualize some temperature data. I have a list of temperatures for the past 7 days and want to make a point-to-point line diagram. My problem is with the different coordinate systems and adjusting the data to the grid.

    XAML:

        <Grid Height="167" HorizontalAlignment="Left" Margin="6,6,0,0" Name="grid1" VerticalAlignment="Bottom" Width="455" />

    C# (draft): http://pastebin.com/6UWkMFj1

    scale is a global variable that changes with a slider (1-10). How do I correct my application so the line is always centered? As it is now, it starts out centered, but if I crank the slider up to 3-4 the line goes up and beyond the application window. I would also like to use the full height of the grid, not just a small piece, as in the images below:

    http://img32.imageshack.us/i/002wtvu.jpg/
    http://img691.imageshack.us/i/001tqco.jpg/

    As you can see, I have worked out my data so that day 1 with a temperature of 62 F is lower than day 2 with a temperature of 76 F, but I have scaling and placement issues... could somebody straighten out my math? :-)
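    The usual cure for both symptoms (drifting off-screen as the scale grows, and not using the full height) is to normalize each temperature into the grid's own pixel range rather than multiplying raw values by a slider factor. A sketch of the mapping (in Java only for illustration; the variable names are made up):

        // Map a temperature onto a y pixel that always spans the grid's full height.
        static double toY(double temp, double minTemp, double maxTemp, double gridHeight) {
            double t = (temp - minTemp) / (maxTemp - minTemp); // 0..1 over the observed range
            return gridHeight - t * gridHeight;                // flip: pixel y grows downward
        }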

    Read the article

  • Global mouseMove

    - by Jacob Kofoed
    I have made the following JavaScript to be used in my jQuery-based website. What it does is move a slider up/down, scaling the item above it taller/shorter. Everything works fine, but since the slider is only a few pixels in height and the move event is a bit slow (it does not fire for every pixel), when I move the mouse fast the slider can't keep up and the mouse gets outside the slider item. Then the mousemove event is no longer triggered, because it is bound to the slider.

    I guess everything could be fixed by making the mousemove global to the whole site, but I can't make that work, or at least I don't know how. Should it be bound to document, or body? Here is my current code for the slider:

        $.fn.resize = function (itemToResize) {
            // note: these are accidental globals; `var` would keep them scoped
            MinSize = 100;
            MaxSize = 800;
            pageYstart = 0;
            sliderMoveing = false;
            nuskriverHeight = 0;

            this.mousedown(function (e) {
                pageYstart = e.pageY;
                sliderMoveing = true;
                nuskriverHeight = parseFloat(itemToResize.css('height'));
            });

            // mouseup and mousemove are bound to the slider itself, which is
            // why the drag breaks once the pointer escapes the element;
            // binding them to $(document) is the usual fix
            this.mouseup(function () {
                sliderMoveing = false;
            });

            this.mousemove(function (e) {
                if (sliderMoveing) {
                    itemToResize.css('height', nuskriverHeight + (e.pageY - pageYstart));
                    if (parseFloat(itemToResize.css('height')) > MaxSize) {
                        itemToResize.css('height', MaxSize);
                    }
                    if (parseFloat(itemToResize.css('height')) < MinSize) {
                        itemToResize.css('height', MinSize);
                    }
                }
            });
        };

    Thanks for any help, it is much appreciated.

    Read the article

  • Guidance required: first time working with a real high-end database (size = 50GB)

    - by claws
    I've got a project designing a database; this is going to be my first large-scale project. The good thing is that the information is mostly organized and currently stored in text files. The size of this information is 50GB. There will be a few million records in each table, and around 50 tables. I need to provide a web interface for searching and browsing. I'm going to use the MySQL DBMS.

    I've never worked with a database larger than 200MB before, so speed and performance were never a concern, though I did follow things like normalization and indexes. I never used any kind of testing/benchmarking/query optimization/whatever, because I never had to care about them. But here the purpose of creating the database is to make it quickly searchable, so I need to consider all possible aspects of the design. I was browsing the archives and found:

    http://stackoverflow.com/questions/1981526/what-should-every-developer-know-about-databases
    http://stackoverflow.com/questions/621884/database-development-mistakes-made-by-app-developers

    I'm going to keep the points mentioned in the above answers in mind. What else should I know? What else should I keep in mind?

    Read the article

  • Efficiently select top row for each category in the set

    - by VladV
    I need to select a top row for each category from a known set (somewhat similar to this question). The problem is how to make this query efficient on a large number of rows. For example, let's create a table that stores temperature recordings in several places:

        CREATE TABLE #t (
            placeId int,
            ts datetime,
            temp int,
            PRIMARY KEY (ts, placeId)
        )

        -- insert some sample data
        SET NOCOUNT ON
        DECLARE @n int, @ts datetime
        SELECT @n = 1000, @ts = '2000-01-01'
        WHILE (@n > 0)
        BEGIN
            INSERT INTO #t VALUES (@n % 10, @ts, @n % 37)
            IF (@n % 10 = 0) SET @ts = DATEADD(hour, 1, @ts)
            SET @n = @n - 1
        END

    Now I need to get the latest recording for each of the places 1, 2, 3. This way is efficient, but doesn't scale well (and looks dirty):

        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t
            WHERE placeId = 1 ORDER BY ts DESC
        ) t1
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t
            WHERE placeId = 2 ORDER BY ts DESC
        ) t2
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t
            WHERE placeId = 3 ORDER BY ts DESC
        ) t3

    The following looks better but works much less efficiently (30% vs 70% according to the optimizer):

        SELECT placeId, ts, temp FROM (
            SELECT placeId, ts, temp,
                   ROW_NUMBER() OVER (PARTITION BY placeId ORDER BY ts DESC) rownum
            FROM #t
            WHERE placeId IN (1, 2, 3)
        ) t
        WHERE rownum = 1

    The problem is that in the latter query's execution plan, a clustered index scan is performed on #t and 300 rows are retrieved, sorted, numbered, and then filtered, leaving only 3 rows. The former query instead fetches one row three times. Is there a way to perform the query efficiently without lots of unions?

    Read the article

  • Drawing a rectangle in millimeters doesn't show an accurate width

    - by user338081
    I am drawing a rectangle in millimeters on a panel, using the following C# code, with the width and length entered in mm at runtime. However, the resulting rectangle varies in size on different monitors. I want the rectangle to appear the same size irrespective of which monitor the app runs on. Can anyone help me? Currently a width of 10mm measures 12mm with a ruler, and a length of 10mm measures 11mm; I tested the app on different monitors, and there again it shows different lengths. Is there any way I can make it show the same width and length everywhere?

        void panel1_Paint(object sender, PaintEventArgs e)
        {
            SolidBrush ygBrush = new SolidBrush(Color.YellowGreen);
            // use the paint event's Graphics; CreateGraphics() in a Paint
            // handler produces a temporary context that is soon overdrawn
            g = e.Graphics;
            g.PageUnit = GraphicsUnit.Millimeter;
            int w = Int32.Parse(textBox1.Text);
            int h = Int32.Parse(textBox2.Text);
            rct = new Rectangle(94, 27, w, h);
            g.FillRectangle(ygBrush, rct);
        }
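    The underlying issue is that millimeter page units trust the DPI the system reports, and monitors rarely report their true physical DPI, so unless the OS is configured with the real value, no drawing API can hit exact millimeters. The conversion itself is just the following (a sketch, in Java only because the principle is the same everywhere):

        import java.awt.Toolkit;

        public class MmToPx {
            // Convert millimeters to pixels using the DPI the system *claims*;
            // if that reported DPI is wrong, the on-screen size is wrong too.
            static double mmToPixels(double mm) {
                int dpi = Toolkit.getDefaultToolkit().getScreenResolution(); // reported, not measured
                return mm / 25.4 * dpi; // 25.4 mm per inch
            }
        }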

    Read the article

  • [C#] Dynamic user-interface, WPF or not?

    - by pieter.lowie
    Hi, I'm currently working on an application that helps people understand how to do their job. You can see it as a personal coach that guides them through all the steps they need to perform, which no normal person could keep remembering. In my previous application we had the ability to show the user up to 4 pictures (which proved to be more than enough). The application would load the data, see how many pictures were in every instruction, and then lay out the pictures in the best-fitting way without messing up their scale and resolution. This was all done with GDI+ and worked very well.

    Of course, change always happens: my bosses came up with some great ideas. They want to be able to show movies on the screen, animated GIFs, and 3D models that can rotate or animate. So I think we have pushed GDI+ to its limits and it's time to look for something different.

    I have heard and read about WPF but have no experience with it. Is it even possible to do all I am asking in WPF? And what about the old picture-merging thing I wrote; can we also get that done in WPF? I tried to make some things work, but it didn't go as smoothly as I had hoped. I'm also concerned about the fact that the interface needs to be dynamic: one moment it should be showing a picture with some text above it, the next moment another text with a video under it.

    I would love to hear some opinions here, and if you have other suggestions I should look into, please tell me. Thanks in advance.

    PS: If WPF is the choice, should I convince my boss to change to .NET 4.0?

    Read the article

  • How to Easily Generate Synth Chord Sounds in Android?

    - by barata7
    How can I easily generate synth chord sounds in Android? I want to dynamically generate in-game music using 8-bit tones. I tried AudioTrack, but have not gotten nice sounds out of it yet. Are there any examples out there? I have tried the following code without success:

        import android.media.AudioFormat;
        import android.media.AudioManager;
        import android.media.AudioTrack;

        public class BitLoose {
            private final int duration = 1; // seconds
            private final int sampleRate = 4200;
            private final int numSamples = duration * sampleRate;
            private final double sample[] = new double[numSamples];
            final AudioTrack audioTrack;

            public BitLoose() {
                audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                        AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_8BIT, numSamples,
                        AudioTrack.MODE_STREAM);
                audioTrack.play();
            }

            public void addTone(final int freqOfTone) {
                // fill out the array; multiply before dividing so the frequency
                // is not mangled by integer division
                for (int i = 0; i < numSamples; ++i) {
                    sample[i] = Math.sin(2 * Math.PI * i * freqOfTone / sampleRate);
                }
                // convert to 8-bit PCM, which on Android is unsigned and centered
                // on 128 (the original scaled into a signed range, which distorts)
                final byte generatedSnd[] = new byte[numSamples];
                int idx = 0;
                for (final double dVal : sample) {
                    generatedSnd[idx++] = (byte) (dVal * 127 + 128);
                }
                // write all the generated samples, not sampleRate bytes
                audioTrack.write(generatedSnd, 0, numSamples);
            }

            public void stop() {
                audioTrack.stop();
            }
        }

    Read the article

  • Zend_Db Enum Values [Closed]

    - by scopus
    Edit: I found this solution:

        $metadata = $result->getTable()->info('metadata');
        echo $metadata['Continent']['DATA_TYPE'];

    Hi, I want to get the enum values of a column in Zend_Db. My code:

        $select = $this->select();
        $result = $select->fetchAll();
        print_r($result->getTable());

    Output:

        Example Object
        (
            [_name] => country
            [query] => Zend_Db_Table_Select Object
                (
                    [_info:protected] => Array
                        (
                            [schema] =>
                            [name] => country
                            [cols] => Array
                                (
                                    [0] => Code
                                    [1] => Continent
                                )
                            [primary] => Array
                                (
                                    [1] => Code
                                )
                            [metadata] => Array
                                (
                                    [Continent] => Array
                                        (
                                            [SCHEMA_NAME] =>
                                            [TABLE_NAME] => country
                                            [COLUMN_NAME] => Continent
                                            [COLUMN_POSITION] => 3
                                            [DATA_TYPE] => enum('Asia','Europe','North America','Africa','Oceania','Antarctica','South America')
                                            [DEFAULT] => Asia
                                            [NULLABLE] =>
                                            [LENGTH] =>
                                            [SCALE] =>
                                            [PRECISION] =>
                                            [UNSIGNED] =>
                                            [PRIMARY] =>
                                            [PRIMARY_POSITION] =>
                                            [IDENTITY] =>
                                        )
                                )
                        )
                )
        )

    I can see the enum values in DATA_TYPE, but I couldn't work out how to read them. How can I get DATA_TYPE?

    Read the article

  • Are there any NOSQL-compatible CMS projects?

    - by Michael
    This question is partially related to an older question (Any CMS is Google App Engine compatible?), but is slightly more general. It seems that in most CMS systems, the most fragile failure point is the database. Traditional database implementations scale poorly and will never be able to handle unforeseen spikes of traffic. Since Google App Engine was designed to help even small businesses overcome that problem, I had the same question that was asked earlier this year, with less than satisfactory answers.

    But more generally, where are the CMS projects that support NOSQL databases? Looking over Wikipedia's list of CMS platforms, I see without much effort that only traditional RDBMSs are supported by every single vendor on the list. I would have expected to see at least one or two projects handling CouchDB or similar engines. I understand the complexities of implementing a NOSQL solution to a problem that is typically solved using the relations cleanly expressed in any RDBMS, but there seems to be a rather wide market gap. Since databases are, today, easily outsourced to Google, Amazon, and others which use NOSQL models, I am amazed that there are not more projects actively pursuing this path. Am I simply not aware? Can someone please point me to projects that have real momentum and are developing on this path? I'm looking for two things:

    - a CMS that has as its backbone a NOSQL database, enabling easy database outsourcing (hosted MySQL clusters and similar solutions are not what I'm looking for)
    - a project that is built to run on either a PaaS architecture like Google App Engine or an IaaS architecture like Amazon EC2

    Any pointers in that direction would be most welcome.

    Read the article

  • Using PHP Frameworks to get Web 2.0 or Ajax and Other Special Features

    - by user504958
    I'm still struggling to understand when or how to use a framework such as Zend or Yii. Here are some of the features I'm going to need in my next project, and I don't understand frameworks well enough to know where the framework fits into the picture. I won't say exactly what the project is, but think of something like Yelp or Merchant Circle (on a smaller scale, of course): a directory project. It will contain a search box and links to all and/or popular categories.

    1) Autosuggest in the search box (I already know how to do this using jQuery).
    2) Analyze the terms entered into the search box to determine whether a word is misspelled; offer to correct the misspelling, or automatically correct it and show relevant results.
    3) Offer items, links, or ads related to the search term.
    4) Allow users to determine which fields are shown.
    5) Allow users to sort the results however they choose.
    6) Allow editing of records in a grid/list view; post forms without refreshing the page; delete or add records without going to a different page or reloading the current one.

    Read the article
