Search Results

Search found 16362 results on 655 pages for 'audio interface'.

  • Blogger: a major overhaul of Google's blogging platform; the new interface will be developed with Google Web Toolkit

    Google announces a major overhaul of the Blogger platform; its new interface will be developed with Google Web Toolkit. Blogger, the well-known web publishing service bought by Google in 2003, is about to get a complete redesign, according to its product manager Chang Kim. It is the biggest interface change in the history of the world's sixth-largest site, which boasts 400 million readers and 500 billion words published across half a billion blog posts since the service was created. The next generation of the user interface for the post editor and the dashboard of the...

    Read the article

  • Should I implement an interface directly or have the superclass do it?

    - by c_maker
    Is there a difference between

        public class A extends AbstractB implements C {...}

    versus

        public class A extends AbstractB {...}

    where AbstractB itself is declared as

        public abstract class AbstractB implements C {...}

    I understand that in both cases class A will end up conforming to the interface. In the second case, AbstractB can provide implementations for the interface methods in C. Is that the only difference? If I do NOT want to provide an implementation for any of the interface methods in AbstractB, which style should I be using? Does using one or the other have some hidden 'documentation' purpose?
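
    For concreteness, a minimal sketch of the two styles (hypothetical names; both compile, and in either case the concrete class must supply the method body unless AbstractB does):

        interface C {
            void doWork();
        }

        // Style 1: the subclass restates the interface.
        abstract class AbstractB1 {}
        class A1 extends AbstractB1 implements C {
            public void doWork() { /* ... */ }
        }

        // Style 2: the abstract superclass declares the interface
        // without implementing it; concrete subclasses still must.
        abstract class AbstractB2 implements C {}
        class A2 extends AbstractB2 {
            public void doWork() { /* ... */ }
        }

    One practical difference: with style 2, a reader of A2's declaration alone cannot see that it conforms to C without looking at the superclass, which is arguably the 'documentation' aspect of restating it.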

    Read the article

  • Digia unveils a new 3D human-machine interface (HMI) concept for cars, implemented with Qt

    Digia unveils a new 3D human-machine interface (HMI) concept for cars, implemented with Qt. At the Qt Developer Days 2011 in Munich, Digia Qt Commercial showed a new 3D human-machine interface (HMI) concept for cars, built on the upcoming commercial release of Qt 4.8. The concept will be used as a new innovation platform to ease the integration and presentation of new technology ideas. The 3D HMI's interaction principle is designed for ease and intuitiveness of use: the user can control the interface with approximate gestures instead of pointing precisely. Since the system...

    Read the article

  • WPF Storyboard delay in playing wma files

    - by Rita
    I'm a complete beginner in WPF and have an app that uses a Storyboard to play a sound:

        public void PlaySound()
        {
            MediaElement m = (MediaElement)audio.FindName("MySound.wma");
            m.IsMuted = false;
            FrameworkElement audioKey = (FrameworkElement)keys.FindName("MySound");
            Storyboard s = (Storyboard)audioKey.FindResource("MySound.wma");
            s.Begin(audioKey);
        }

        <Storyboard x:Key="MySound.wma">
            <MediaTimeline d:DesignTimeNaturalDuration="1.615" BeginTime="00:00:00"
                           Storyboard.TargetName="MySound.wma" Source="Audio\MySound.wma"/>
        </Storyboard>

    I have a horrible lag, and sometimes it takes a good 10 seconds for the sound to be played. I suspect this has something to do with the fact that no matter how long I wait, the sound doesn't get played until after I leave the function. I don't understand it: I call Begin, and nothing happens. Is there a way to replace this method, or the Storyboard object, with something that plays instantly and without a lag?
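
    If the Storyboard approach can't be made responsive, one hedged alternative sketch (untested) is to drive a System.Windows.Media.MediaPlayer directly and pay the file-opening cost once, up front, rather than inside the call that should play instantly:

        // using System;
        // using System.Windows.Media;
        private MediaPlayer player = new MediaPlayer();

        public void PreloadSound()
        {
            // Open loads asynchronously; do it at startup so PlaySound has nothing left to load.
            player.Open(new Uri(@"Audio\MySound.wma", UriKind.Relative));
        }

        public void PlaySound()
        {
            player.Position = TimeSpan.Zero; // rewind for repeated plays
            player.Play();
        }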

    Read the article

  • Why does this gstreamer pipeline stall?

    - by timday
    I've been playing around with gstreamer pipelines using gst-launch. I don't have any problems if I just want to process audio or video separately (to separate files, or to alsasink/ximagesink), but I'm confused by what I need to do to mux the streams back together using, say, avimux. This:

        gst-launch-0.10 filesrc location=MVI_2034.AVI ! decodebin name=dec \
          dec. ! queue ! audioconvert ! 'audio/x-raw-int,rate=44100,channels=1' ! queue ! mux. \
          dec. ! queue ! videoflip 1 ! ffmpegcolorspace ! jpegenc ! queue ! mux. \
          avimux name=mux ! filesink location=out.avi

    just outputs

        Setting pipeline to PAUSED ...
        Pipeline is PREROLLING ...

    and then stalls indefinitely. What's the trick?
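
    A hedged guess at the cause, untested: a muxer cannot finish prerolling until it has data on every input pad, and the default-sized queues between decodebin and avimux can fill up and deadlock before that happens; also, videoflip takes a method property rather than a bare 1. Something along these lines may unstick it:

        gst-launch-0.10 filesrc location=MVI_2034.AVI ! decodebin name=dec \
          dec. ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! audioconvert \
            ! 'audio/x-raw-int,rate=44100,channels=1' ! queue ! mux. \
          dec. ! queue ! videoflip method=1 ! ffmpegcolorspace ! jpegenc ! queue ! mux. \
          avimux name=mux ! filesink location=out.avi

    Setting all three max-size properties to 0 makes the audio queue unbounded, so it cannot block while the video branch catches up.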

    Read the article

  • What do you use to play sound in iPhone games?

    - by zoul
    Hello! I have a performance-intensive iPhone game I would like to add sounds to. There seem to be about three main choices: (1) AVAudioPlayer, (2) Audio Queues and (3) OpenAL. I'd hate to write pages of low-level code just to play a sample, so I would like to use AVAudioPlayer. The problem is that it seems to kill the performance: I've done a simple measurement using CFAbsoluteTimeGetCurrent, and the play message seems to take somewhere from 9 to 30 ms to finish. That's quite miserable, considering that 25 ms == 40 fps. Of course there is the prepareToPlay method that should speed things up. That's why I wrote a simple class that keeps several AVAudioPlayers at its disposal, prepares them beforehand, and then plays the sample using a prepared player. No cigar; it still takes the ~20 ms I mentioned above. Such performance is unusable for games, so what do you use to play sounds with decent performance on the iPhone? Am I doing something wrong with AVAudioPlayer? Do you play sounds with Audio Queues? (I wrote something akin to AVAudioPlayer before 2.2 came out, and I would love to be spared that experience.) Do you use OpenAL? If yes, is there a simple way to play sounds with OpenAL, or do you have to write pages of code? Update: Yes, playing sounds with OpenAL is fairly simple.
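
    One low-code option often suggested for short one-shot samples is Audio Toolbox's System Sound Services; a minimal sketch (the file name is hypothetical, and samples must be short, uncompressed, and play at system volume):

        #import <AudioToolbox/AudioToolbox.h>

        // Load once, at startup:
        SystemSoundID shotSound;
        NSString *path = [[NSBundle mainBundle] pathForResource:@"shot" ofType:@"caf"];
        AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:path], &shotSound);

        // Play per event; the call returns immediately:
        AudioServicesPlaySystemSound(shotSound);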

    Read the article

  • JMF microphone volume controller

    - by TacB0sS
    How do I obtain the microphone volume controller in JMF? This is what I have: I tried this implementation concept, but I keep getting a null from the first volume processor when I try to get the stream. Here is how I do it:

        // the device is the media device, specifically audio
        Processor processorForVolume = Manager.createProcessor(device.getLocator());

        // wait until configured
        ProcessorStates newState = new ProcessorStateListener(Processor.Configured)
                .waitForProcessorState(processorForVolume);
        System.out.println("volumeProcessorState: " + newState);

        // setting the content descriptor to null (read in another thread) - this allows us to get the gain control
        processorForVolume.setContentDescriptor(null);

        // set the track control format to one supported by the device and the track control.
        // I didn't match it to an RTP-allowed format, but I don't think this has anything to do with it...
        TrackControl[] trackControls = processorForVolume.getTrackControls();
        if (trackControls.length == 0)
            throw new MC_Exception("No track controls were found for this device:", new Object[]{device});
        for (TrackControl control : trackControls)
            trackManipulator.manipulateTrackControls(control);

        // wait until the processor is realized
        newState = new ProcessorStateListener(Controller.Realized)
                .waitForProcessorState(processorForVolume);
        System.out.println("volumeProcessorState: " + newState);

        // receives the gain control
        micVolumeController = processorForVolume.getGainControl();

        // cannot get the output stream to process further... any suggestions?
        processor = Manager.createProcessor(processorForVolume.getDataOutput());
        new ProcessorStateListener(Processor.Configured).waitForProcessorState(processor);
        processor.setContentDescriptor(DeviceCapturingManager.RAW_RTP);
        new ProcessorStateListener(Controller.Realized).waitForProcessorState(processor);

    This is the output it generates:

        volumeProcessorState: Configured
        format set to track control - com.sun.media.ProcessEngine$ProcTControl@1627c16: LINEAR, 48000.0 Hz, 16-bit, Stereo, LittleEndian, Signed
        volumeProcessorState: Realized

    and the data output from the processor is null. I should make clear that when the content descriptor != null I do get an output stream but not the volume controller, and when it is null I get the controller but no stream. I am trying to connect to an audio microphone device. Adam.

    Read the article

  • Graphing the pitch (frequency) of a sound

    - by Coronatus
    I want to plot the pitch of a sound in a graph. Currently I can plot the amplitude; the graph below is created from the data returned by getUnscaledAmplitude():

        AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(
                new BufferedInputStream(new FileInputStream(file)));
        byte[] bytes = new byte[(int) (audioInputStream.getFrameLength())
                * (audioInputStream.getFormat().getFrameSize())];
        audioInputStream.read(bytes);

        // Get amplitude values for each audio channel in an array.
        graphData = type.getUnscaledAmplitude(bytes, this);

        public int[][] getUnscaledAmplitude(byte[] eightBitByteArray, AudioInfo audioInfo)
        {
            int[][] toReturn = new int[audioInfo.getNumberOfChannels()]
                    [eightBitByteArray.length / (2 * audioInfo.getNumberOfChannels())];
            int index = 0;
            for (int audioByte = 0; audioByte < eightBitByteArray.length;)
            {
                for (int channel = 0; channel < audioInfo.getNumberOfChannels(); channel++)
                {
                    // Do the byte-to-sample conversion.
                    int low = (int) eightBitByteArray[audioByte];
                    audioByte++;
                    int high = (int) eightBitByteArray[audioByte];
                    audioByte++;
                    int sample = (high << 8) + (low & 0x00ff);

                    if (sample < audioInfo.sampleMin)
                    {
                        audioInfo.sampleMin = sample;
                    }
                    else if (sample > audioInfo.sampleMax)
                    {
                        audioInfo.sampleMax = sample;
                    }
                    toReturn[channel][index] = sample;
                }
                index++;
            }
            return toReturn;
        }

    But I need to show the audio's pitch, not its amplitude. A fast Fourier transform appears to get the pitch, but it needs to know more variables than the raw bytes I have, and it is very complex and mathematical. Is there a way I can do this?
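
    For a rough estimate, here is a hedged sketch (hypothetical helper; a plain O(n^2) DFT, where a real implementation would use an FFT library and a proper pitch-detection method such as autocorrelation). It takes one channel of the samples produced above plus the sample rate from the stream's AudioFormat, and returns the frequency of the strongest bin in one window:

        // Naive dominant-frequency estimate over one window of samples.
        public static double dominantFrequency(int[] samples, float sampleRate) {
            int n = samples.length;
            double bestMagnitude = -1;
            int bestBin = 0;
            for (int bin = 1; bin < n / 2; bin++) {      // skip DC; stop at Nyquist
                double re = 0, im = 0;
                for (int t = 0; t < n; t++) {
                    double angle = 2 * Math.PI * bin * t / n;
                    re += samples[t] * Math.cos(angle);
                    im -= samples[t] * Math.sin(angle);
                }
                double magnitude = re * re + im * im;
                if (magnitude > bestMagnitude) {
                    bestMagnitude = magnitude;
                    bestBin = bin;
                }
            }
            return bestBin * sampleRate / n;             // bin index -> Hz
        }

    Plotting this value for successive windows (say, 1024 samples each) gives a pitch-over-time graph analogous to the amplitude one.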

    Read the article

  • Python on Mac: Fink? MacPorts? Builtin? Homebrew? Binary installer?

    - by BastiBechtold
    For the last few days, I have been trying to use Python for some audio development. The thing is, Mac OS X does not handle uninstalling software well; actually, there is no way to uninstall anything. Once it is on your system, you had better pray that it didn't do any funny stuff. Hence, I don't really want to rely on installer packages for Python. So I turned to Homebrew and installed Python with it. Works fabulously. Installing NumPy, SciPy and Matplotlib using pip was no (big) problem either. Now I want to play audio. There is a host of different packages out there, but pip does not seem willing to install any of them. There is, however, a binary distribution for PyGame, which I guess should work with the built-in Python. Hence my question: what would you do? Would you just install the binary distributions and hope that they interoperate well and never need uninstalling? Would you hack your way through whichever package management system you prefer and deal with its problems? Something else?

    Read the article

  • Syncing two AS3 NetStreams

    - by Lowgain
    I'm writing an app that requires an audio stream to be recording while a backing track is played. I have this working, but there is an inconsistent gap in between playback and record starting. I don't know if I can do anything to make the sync perfect every time, so I've been trying to track what time each stream starts so I can calculate the delay and trim it server-side. This also has proved to be a challenge as no events seem to be sent when a connection starts (as far as I know). I've tried using various properties like the streams' buffer sizes, etc. I'm thinking now that as my recorded audio is only mono, I may be able to put some kind of 'control signal' on the second stereo track which I could use to determine exactly when a sound starts recording (or stick the whole backing track in that channel so I can sync them that way). This leaves me with the new problem of properly injecting this sound into the NetStream. If anyone has any idea whether or not any of these ideas will work, how to execute them, or some alternatives, that would be extremely helpful! Been working on this issue for awhile

    Read the article

  • I never really understood: what is an Application Binary Interface (ABI)?

    - by claws
    I never clearly understood what an ABI is. I'm sorry for such a lengthy question; I just want to understand things clearly. Please don't point me to the wiki article: if I could understand it, I wouldn't be here posting such a lengthy post. This is my mindset about different interfaces: a TV remote is an interface between the user and the TV. It is an existing entity, but useless (it provides no functionality) by itself; all the functionality for each of those buttons on the remote is implemented in the television set.

    Interface: an "existing entity" layer between the functionality and the consumer of that functionality. An interface by itself doesn't do anything; it just invokes the functionality lying behind it. Now, depending on who the consumer is, there are different types of interfaces.

    Command Line Interface (CLI): commands are the existing entities, the consumer is the user, and the functionality lies behind. Functionality: my software's functionality, which solves the purpose for which we are describing this interface. Existing entities: commands. Consumer: user.

    Graphical User Interface (GUI): windows, buttons, etc. are the existing entities, again the consumer is the user, and the functionality lies behind. Functionality: my software's functionality. Existing entities: windows, buttons, etc. Consumer: user.

    Application Programming Interface (API): functions or, to be more correct, interfaces (in interface-based programming) are the existing entities; the consumer here is another program, not a user; and again the functionality lies behind this layer. Functionality: my software's functionality. Existing entities: functions, interfaces (arrays of functions). Consumer: another program/application.

    Application Binary Interface (ABI): here my problem starts. Functionality: ??? Existing entities: ??? Consumer: ???

    I've written software in different languages and provided different kinds of interfaces (CLI, GUI, API), but I'm not sure whether I have ever provided an ABI. http://en.wikipedia.org/wiki/Application_binary_interface says:

    "ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system. Other ABIs standardize details such as the C++ name mangling, exception propagation, and the calling convention between compilers on the same platform, but do not require cross-platform compatibility."

    Who needs these details? Please don't say "the OS". I know assembly programming; I know how linking and loading work; I know exactly what happens inside. Where does C++ name mangling come in? I thought we were talking at the binary level; where do languages come in? Anyway, I downloaded the System V Application Binary Interface, Edition 4.1 (1997-03-18) [PDF] to see what exactly it contains. Well, most of it didn't make any sense. Why does it contain two chapters (the 4th and 5th) that describe the ELF file format? In fact, those are the only two significant chapters of that specification; the rest are all "processor specific". Anyway, I thought that was a completely different topic. Please don't say that the ELF file format spec is the ABI; it doesn't qualify as an interface according to the definition. I know that since we are talking at such a low level it must be very specific, but I'm not sure how it is "Instruction Set Architecture (ISA)" specific. Where can I find MS Windows' ABI? So, these are the major queries that are bugging me.
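
    To make the question concrete, a hedged sketch (invented names) of one place an ABI shows up: a struct passed across a compiled-library boundary only works if both sides agree on layout and calling convention.

        /* liba.h - header for a shared library that ships only as a binary */
        struct Point { int x; int y; };                   /* size, alignment, field order: ABI */
        double distance(struct Point a, struct Point b);  /* how args are passed and returned: ABI */

        /* main.c - compiled separately, possibly years later, by another compiler.
           The source-level API is unchanged, but if this compiler lays out
           struct Point differently or passes the arguments in different
           registers, the call silently corrupts data. The consumer of an ABI
           is compiled code (and the compiler/linker/loader that produced it),
           not a human. */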

    Read the article

  • Playing a sequence of sounds without gaps (iPhone)

    - by Fiire
    I thought maybe the fastest way was to go with Sound Services. It is quite efficient, but I need to play sounds in a sequence, not overlapped; therefore I used a callback method to check when each sound has finished. This cycle produces around 0.3 seconds of lag. I know this sounds very strict, but it is basically the main axis of the program.

    EDIT: I have now tried using AVAudioPlayer, but I can't play sounds in a sequence without using audioPlayerDidFinishPlaying, since that would put me in the same situation as with the callback method of Sound Services.

    EDIT 2: I think that if I could somehow join the parts of the sounds I want to play into one large file, I could get the whole audio file to sound continuously.

    EDIT 3: I thought this would work, but the audio overlaps:

        waitTime = player.deviceCurrentTime;
        for (int k = 0; k < [colores count]; k++)
        {
            player.currentTime = 0;
            [player playAtTime:waitTime];
            waitTime += player.duration;
        }

    Thanks
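
    A hedged sketch of the scheduling approach playAtTime: is meant for: one prepared AVAudioPlayer per sound (players is a hypothetical NSArray of them), all sharing the device clock, so each clip starts exactly where the previous one ends instead of rescheduling a single player against itself:

        // Each sound gets its own prepared player; the shared audio-device
        // clock keeps the sequence gapless and non-overlapping.
        NSTimeInterval startTime = [[players objectAtIndex:0] deviceCurrentTime] + 0.1;
        for (AVAudioPlayer *p in players)
        {
            [p playAtTime:startTime];
            startTime += p.duration;
        }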

    Read the article

  • MVC - Cocoa interface - Cocoa Design pattern book

    - by Idan
    So I started reading this book: http://www.amazon.com/Cocoa-Design-Patterns-Erik-Buck/dp/0321535022

    In chapter 2 it explains the MVC design pattern and gives an example that I need some clarification on. The simple example shows a view with the following fields: hourlyRate, WorkHours, Standarthours, salary. The example is divided into 3 parts:

    View: contains some text fields and a table (the table contains a list of employees' data).
    Controller: comprised of an NSArrayController class (contains an array of MyEmployee).
    Model: a MyEmployee class which describes an employee.

    MyEmployee has one method, which returns the salary according to the calculation logic, and attributes in accordance with the view's UI controls. MyEmployee inherits from NSManagedObject. A few things I'm not sure of:

    1. Inside the MyEmployee class implementation file, the calculation method gets the class attributes using a statement like [[self valueForKey:@"hourlyRate"] floatValue]; However, inside the header there is no data member named hourlyRate, or any of the view fields. I'm not quite sure how that works, and how it gets the value from the right view field (does it have to have the same name as the field in the view?). Maybe the connection is made somehow using Interface Builder and was not shown in the book?

    And more important:

    2. How does it separate the view from the model? Let's say, as the book implies might happen, I decide one day to remove one of the fields in the view. As far as I understand, that means changing the way the salary method works in MyEmployee (because we have one field less) and removing one attribute from the same class. So how does that separate the view from the model, if changing one reflects on the other? I guess I'm getting something wrong... Any comments?

    Thanks
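
    On the first question, a hedged sketch of what is going on: NSManagedObject answers valueForKey: from the Core Data model's attributes at runtime (Key-Value Coding), so hourlyRate never needs to appear in the header. Attribute names here other than hourlyRate are assumptions:

        // MyEmployee.h - no ivars for the attributes; they live in the managed object model.
        @interface MyEmployee : NSManagedObject
        - (NSNumber *)salary;
        @end

        // MyEmployee.m
        @implementation MyEmployee
        - (NSNumber *)salary
        {
            float rate  = [[self valueForKey:@"hourlyRate"] floatValue]; // resolved via KVC
            float hours = [[self valueForKey:@"workHours"] floatValue];  // assumed attribute name
            return [NSNumber numberWithFloat:rate * hours];
        }
        @end

    Presumably the view's fields are bound to these same keys through the array controller in Interface Builder, which is the connection the book leaves implicit.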

    Read the article

  • Moq: Unable to cast to interface

    - by Pickels
    Hello, earlier today I asked this question. Since Moq creates its own class from an interface, I wasn't able to cast it to a different class. So it got me wondering: what if I created an ICustomPrincipal and tried to cast to that? This is how my mocks look:

        var MockHttpContext = new Mock<HttpContextBase>();
        var MockPrincipal = new Mock<ICustomPrincipal>();

        MockHttpContext.SetupGet(h => h.User).Returns(MockPrincipal.Object);

    In the method I am trying to test, the following code gives the error (again):

        var user = (ICustomPrincipal)httpContext.User;

    The error is the following:

        Unable to cast object of type 'IPrincipalProxy4081807111564298854aabfc890edcc8' to type 'MyProject.Web.ICustomPrincipal'.

    I guess I still need some practice with interfaces and Moq, but shouldn't I be able to cast the class that Moq created back to ICustomPrincipal? I know httpContext.User returns an IPrincipal, so maybe something gets lost there? If anybody can help me, I would appreciate that. Pickels

    Edit: As requested, the full code of the method I am testing. It's still not finished, but this is what I have so far:

        public bool AuthorizeCore(HttpContextBase httpContext)
        {
            if (httpContext == null)
            {
                throw new ArgumentNullException("httpContext");
            }

            var user = (ICustomPrincipal)httpContext.User;

            if (!user.Identity.IsAuthenticated)
            {
                return false;
            }

            return true;
        }
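
    A hedged guess at the fix, since the proxy name in the error shows the object stored in User implements only IPrincipal: if ICustomPrincipal is declared to extend IPrincipal, a single Mock<ICustomPrincipal>.Object can be assigned to HttpContextBase.User and still be cast back:

        // Hypothetical declaration - the custom interface derives from IPrincipal,
        // so the same mock satisfies both the property type and the cast.
        public interface ICustomPrincipal : System.Security.Principal.IPrincipal
        {
            // custom members, e.g. string DisplayName { get; }
        }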

    Read the article

  • Cannot find interface declaration for 'UIResponder'

    - by lfe-eo
    Hi all, I am running into this issue when trying to compile code for an iPhone app that I inherited from a previous developer. I've poked around on a couple of forums and it seems like the culprit may be a circular #import somewhere.

    First: is there an easy way to find out whether this is the case, and to find which files the loop is in?

    Second: it's definitely possible this isn't the problem. This is the full error (with file paths truncated so it's easier to view here):

        In file included from [...]/Frameworks/UIKit.framework/Headers/UIView.h:9,
                         from [...]/Frameworks/UIKit.framework/Headers/UIActivityIndicatorView.h:8,
                         from [...]/Frameworks/UIKit.framework/Headers/UIKit.h:11,
                         from /Users/wbs/Documents/EINetIPhone/EINetIPhone_Prefix.pch:13:
        [...]/Frameworks/UIKit.framework/Headers/UIResponder.h:15: error: expected ')' before 'UIResponder'
        [...]/Frameworks/UIKit.framework/Headers/UIResponder.h:17: error: expected '{' before '-' token
        [...]/Frameworks/UIKit.framework/Headers/UIResponder.h:42: warning: '@end' must appear in an @implementation context
        [...]/Frameworks/UIKit.framework/Headers/UIResponder.h:51: error: expected ':' before ';' token
        [...]/Frameworks/UIKit.framework/Headers/UIResponder.h:58: error: cannot find interface declaration for 'UIResponder'

    As you can see, there are other errors alongside this one. These seem to be simple syntax errors; however, they appear in one of Apple's UIKit files (not my own), so I seriously doubt that Apple's code is truly producing them. I'm stumped as to how to solve this issue. If anyone has any ideas of things I could try, or ways/places I could get more info on the problem, I'd really appreciate it. I'm very new to Obj-C and iPhone coding.
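
    A hedged illustration of one common cause (the header and the missing @end are invented): an unterminated @interface in a project header pulled in through the .pch before UIKit makes the compiler keep parsing into Apple's headers and report the syntax errors there:

        // BrokenHelper.h - hypothetical project header imported before UIKit
        @interface BrokenHelper : NSObject
        - (void)doSomething;
        // <-- missing @end here; parsing spills into UIResponder.h,
        //     which then "cannot find interface declaration for 'UIResponder'"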

    Read the article

  • Designing a fluid Javascript interface to abstract away the asynchronous nature of AJAX

    - by Anurag
    How would I design an API to hide the asynchronous nature of AJAX and HTTP requests, or basically delay it, to provide a fluid interface? To show an example from Twitter's new Anywhere API:

        // get @ded's first 20 statuses, filter only the tweets that
        // mention photography, and render each into an HTML element
        T.User.find('ded').timeline().first(20).filter(filterer).each(function(status) {
            $('div#tweets').append('<p>' + status.text + '</p>');
        });

        function filterer(status) {
            return status.text.match(/photography/);
        }

    versus this, where the asynchronous nature of each call is clearly visible:

        T.User.find('ded', function(user) {
            user.timeline(function(statuses) {
                statuses.first(20).filter(filterer).each(function(status) {
                    $('div#tweets').append('<p>' + status.text + '</p>');
                });
            });
        });

    It finds the user, gets their tweet timeline, filters only the first 20 tweets, applies a custom filter, and ultimately uses the callback function to process each tweet. I am guessing that a well-designed API like this should work like a query builder (think ORMs), where each function call builds the query (an HTTP URL in this case) until it hits a looping function such as each/map/etc.; at that point the HTTP call is made and the passed-in function becomes the callback. An easy development route would be to make each AJAX call synchronous, but that's probably not the best solution. I am interested in figuring out a way to make it asynchronous and still hide the asynchronous nature of AJAX.
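
    A minimal sketch of the query-builder idea, assuming jQuery's $.getJSON and an invented /timeline endpoint: every call before each() only records state and returns this, and each() is the terminal method that finally fires the single HTTP request.

        // Nothing happens on the network until each() is called.
        function Timeline(user) {
            this.params = { user: user, count: null };
        }
        Timeline.prototype.first = function (n) {
            this.params.count = n;   // just record the limit
            return this;             // keep the chain going
        };
        Timeline.prototype.each = function (fn) {
            $.getJSON('/timeline', this.params, function (statuses) {
                for (var i = 0; i < statuses.length; i++) fn(statuses[i]);
            });
        };

        // usage:
        new Timeline('ded').first(20).each(function (s) { console.log(s.text); });

    A filter() method can be recorded the same way and applied client-side inside each().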

    Read the article

  • Static method, abstract method, interface method comparison?

    - by programmerist
    When should I choose each of these methods? I cannot decide which one to prefer, or when to use each of them. Which one gives the best performance?

    First usage:

        public abstract class _AccessorForSQL
        {
            public abstract bool Save(string sp, ListDictionary ld, CommandType cmdType);
            public abstract bool Update();
            public abstract bool Delete();
            public abstract DataSet Select();
        }

        class GenAccessor : _AccessorForSQL
        {
            DataSet ds;
            DataTable dt;

            public override bool Save(string sp, ListDictionary ld, CommandType cmdType) { return true; }
            public override bool Update() { return true; }
            public override bool Delete() { return true; }
            public override DataSet Select()
            {
                DataSet dst = new DataSet();
                return dst;
            }
        }

    Second usage; I could also write it like this:

        public class GenAccessor
        {
            public static bool Save() { return true; }
            public static bool Update() { return true; }
            public static bool Delete() { return true; }
        }

    Third usage:

        public interface IAccessorForSQL
        {
            bool Delete();
            bool Save(string sp, ListDictionary ld, CommandType cmdType);
            DataSet Select();
            bool Update();
        }

        public class _AccessorForSQL : IAccessorForSQL
        {
            private DataSet ds;
            private DataTable dt;

            public virtual bool Save(string sp, ListDictionary ld, CommandType cmdType) { return true; }
            // ... (Update, Delete, Select omitted as in the original)
        }

    I can use the first one like this:

        GenAccessor gen = new GenAccessor();
        gen.Save();

    And the second one like this:

        GenAccessor.Save();

    Which one do you prefer? When would I use each of them? When do I need an override method, and when do I need a static method?
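
    A hedged illustration of the practical difference (invented names): only the interface or abstract-class versions let the implementation be swapped at runtime, which is the usual argument against static here; raw call performance is nearly identical either way.

        // Code written against the interface accepts any implementation:
        public class ReportJob
        {
            private readonly IAccessorForSQL accessor;
            public ReportJob(IAccessorForSQL accessor) { this.accessor = accessor; }
            public bool Run() { return accessor.Update(); }   // real DB, fake, or mock
        }

        // A static GenAccessor.Update() call is hard-wired at compile time,
        // so it cannot be replaced for testing or for a different backend.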

    Read the article

  • Proper use of the IDisposable interface

    - by cwick
    I know from reading the MSDN documentation that the "primary" use of the IDisposable interface is to clean up unmanaged resources: http://msdn.microsoft.com/en-us/library/system.idisposable.aspx. To me, "unmanaged" means things like database connections, sockets, window handles, etc. But I've seen code where the Dispose method is implemented to free managed resources, which seems redundant to me, since the garbage collector should take care of that for you. For example:

        public class MyCollection : IDisposable
        {
            private List<String> _theList = new List<String>();
            private Dictionary<String, Point> _theDict = new Dictionary<String, Point>();

            // Die, you gravy sucking pig dog!
            public void Dispose()
            {
                _theList.Clear();
                _theDict.Clear();
                _theList = null;
                _theDict = null;
            }
        }

    My question is: does this make the garbage collector free the memory used by MyCollection any faster than it normally would?

    Edit: So far people have posted some good examples of using IDisposable to clean up unmanaged resources such as database connections and bitmaps. But suppose that _theList in the above code contained a million strings, and you wanted to free that memory now, rather than waiting for the garbage collector. Would the above code accomplish that?
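
    For contrast, a hedged sketch of the conventional use, where Dispose releases an unmanaged resource deterministically and the managed fields are simply left to the garbage collector (the file name is invented):

        public class LogFile : IDisposable
        {
            // FileStream wraps an operating-system file handle.
            private System.IO.FileStream _stream =
                new System.IO.FileStream("log.txt", System.IO.FileMode.Append);

            public void Dispose()
            {
                _stream.Dispose();   // release the handle now, not at some future GC
                // No need to null out managed fields; the GC handles their memory.
            }
        }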

    Read the article

  • Building an *efficient* if/then interface for non-technical users to build flow-control in PHP

    - by Brendan
    I am currently building an internal tool, to be used by our management, to control the flow of traffic. I have built an if/then interface allowing the user to set conditions for certain outcomes; however, it is inefficient to use switch statements to control the flow. How can I improve the efficiency of my code? Example:

        if($previous['route_id'] == $condition['route_id'] && $failed == 0) // if we have not moved on to a new set of rules and we haven't failed yet
        {
            switch($condition['type'])
            {
                case 0: $type = $user['hour']; break;
                case 1: $type = $user['location']['region_abv']; break;
                case 2: $type = $user['referrer_domain']; break;
                case 3: $type = $user['affiliate']; break;
                case 4: $type = $user['location']['country_code']; break;
                case 5: $type = $user['location']['city']; break;
            }

            $type = strtolower($type);
            $condition['value'] = strtolower($condition['value']);

            switch($condition['operator'])
            {
                case 0: if($type == $condition['value']); else $failed = '1'; break;
                case 1: if($type != $condition['value']); else $failed = '1'; break;
                case 2: if($type > $condition['value']); else $failed = '1'; break;
                case 3: if($type >= $condition['value']); else $failed = '1'; break;
                case 4: if($type < $condition['value']); else $failed = '1'; break;
                case 5: if($type <= $condition['value']); else $failed = '1'; break;
            }
        }
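
    One hedged way to flatten both switches into data, assuming the same $user and $condition shapes and PHP 5.3+ closures: field access becomes a lookup table of accessors and each operator becomes a small comparison function.

        // Map condition types to field lookups and operators to comparisons.
        $fields = array(
            0 => function ($u) { return $u['hour']; },
            1 => function ($u) { return $u['location']['region_abv']; },
            2 => function ($u) { return $u['referrer_domain']; },
            3 => function ($u) { return $u['affiliate']; },
            4 => function ($u) { return $u['location']['country_code']; },
            5 => function ($u) { return $u['location']['city']; },
        );
        $operators = array(
            0 => function ($a, $b) { return $a == $b; },
            1 => function ($a, $b) { return $a != $b; },
            2 => function ($a, $b) { return $a >  $b; },
            3 => function ($a, $b) { return $a >= $b; },
            4 => function ($a, $b) { return $a <  $b; },
            5 => function ($a, $b) { return $a <= $b; },
        );

        $getField = $fields[$condition['type']];
        $compare  = $operators[$condition['operator']];

        $type  = strtolower($getField($user));
        $value = strtolower($condition['value']);
        if (!$compare($type, $value)) {
            $failed = 1;
        }

    New condition types or operators then become one-line additions to the tables rather than new cases.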

    Read the article

  • Designing a general database interface in PHP

    - by lamas
    I'm creating a small framework for my web projects in PHP so I don't have to do the basic work over and over again for every new website. It is not my goal to create a second CakePHP or CodeIgniter, and I'm also not planning to build my websites with any of the available frameworks, as I generally prefer to use things I've created myself. I have no problems designing the framework when it comes to parts like the core structure, request handling, and so on, but I'm getting stuck designing the database interface for my modules. I've already thought about using the MVC pattern, but decided it would be a bit of an overkill. So the exact problem I'm facing is how my framework's modules (viewCustomers could be a module, for example) should interact with the database. Is it a good idea to write SQL directly in PHP, like mysql_query('SELECT firstname, lastname (.....)')? How could I abstract a query like

        SELECT firstname, lastname FROM customers WHERE id=X

    Would MySQL helper functions like

        $this->db->get(array('firstname', 'lastname'), array('id' => X))

    be a good idea? I suppose not, because they actually make everything more complicated by requiring arrays to be created and passed. Is the Model pattern from MVC my only real option?
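
    A hedged sketch of a middle ground: a tiny query helper over PDO, so modules never concatenate user input into SQL but also don't carry a full ORM. The class and method names are invented, and table/column names must come from trusted code since only the values are bound:

        <?php
        class Db
        {
            private $pdo;

            public function __construct(PDO $pdo) { $this->pdo = $pdo; }

            // Fetch selected columns from one table with simple equality filters.
            public function get($table, array $columns, array $where)
            {
                $sql = 'SELECT ' . implode(', ', $columns) . ' FROM ' . $table;
                if ($where) {
                    $clauses = array();
                    foreach (array_keys($where) as $col) {
                        $clauses[] = "$col = ?";
                    }
                    $sql .= ' WHERE ' . implode(' AND ', $clauses);
                }
                $stmt = $this->pdo->prepare($sql);   // values go in as placeholders
                $stmt->execute(array_values($where));
                return $stmt->fetchAll(PDO::FETCH_ASSOC);
            }
        }

        // usage:
        // $db->get('customers', array('firstname', 'lastname'), array('id' => 42));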

    Read the article

  • Auto-rotating freshly created interface

    - by zoul
    Hello! I have trouble with auto-rotating interfaces in my iPad app. I have a class called Switcher that observes the interface rotation notifications, and when it receives one, it switches the view in the window, a bit like this:

        - (void) orientationChanged: (NSNotification*) notice
        {
            UIDeviceOrientation newIO = [[UIDevice currentDevice] orientation];
            UIViewController *newCtrl = /* something based on newIO */;
            [currentController.view removeFromSuperview]; // remove the old view
            [window addSubview:newCtrl.view];
            [self setCurrentController:newCtrl];
        }

    The problem is that the new view does not auto-rotate. My auto-rotation callback in the controller class looks like this:

        - (BOOL) shouldAutorotateToInterfaceOrientation: (UIInterfaceOrientation) io
        {
            NSString *modes[] = {@"unknown", @"portrait", @"portrait down",
                @"landscape left", @"landscape right"};
            NSLog(@"shouldAutorotateToInterfaceOrientation: %i (%@)", io, modes[io]);
            return YES;
        }

    But no matter how I rotate the device, I find the following in the log:

        shouldAutorotateToInterfaceOrientation: 1 (portrait)
        shouldAutorotateToInterfaceOrientation: 1 (portrait)

    …and willRotateToInterfaceOrientation:duration: does not get called at all. Now what? Orientation changing is becoming my least favourite part of the iPhone SDK… (I can't check the code on the device yet; could it be a bug in the simulator?)

    PS. The subscription code looks like this:

        [[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
        [[NSNotificationCenter defaultCenter] addObserver:self
            selector:@selector(orientationChanged:)
            name:UIDeviceOrientationDidChangeNotification object:nil];
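
    A hedged note on the likely cause: in this SDK generation, rotation callbacks reach the view controller whose view was installed in the window, and controllers whose views are swapped in by hand are never asked. One common workaround (sketch, names invented) is to keep a single container controller in the window and have it swap subviews, so the rotation machinery always has one controller to talk to:

        // The container stays in the window and keeps receiving rotation callbacks;
        // Switcher asks it to swap the visible subview instead of swapping controllers.
        @interface ContainerViewController : UIViewController
        - (void)showView:(UIView *)newView;
        @end

        @implementation ContainerViewController
        - (void)showView:(UIView *)newView
        {
            while ([[self.view subviews] count] > 0)
                [[[self.view subviews] objectAtIndex:0] removeFromSuperview];
            newView.frame = self.view.bounds;
            [self.view addSubview:newView];
        }

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)io
        {
            return YES;   // rotation now reaches us for every swapped-in view
        }
        @end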

    Read the article

  • Action property of interface type

    - by Daniel
    Hi, guys. As I understand it, the nature of an Action is that its properties can be populated with request parameter values. And one wonderful feature is that Struts2 allows you to populate parameter values directly onto a class-typed property ;) Assuming there exists an Action and a property class as below:

        class Action extends ActionSupport
        {
            User user;

            @Action(value="hello" {@result=(.......)})
            public void execute()
            {
                ........
            }

            public void setUser(User user) { this.user = user; }
            public User getUser() { return this.user; }
        }

        class User
        {
            String name;

            public void setName(String name) { this.name = name; }
            public String getName() { return this.name; }
        }

    you could populate the User property by doing this:

        http://...../hello.action?user.name=John

    or via a JSP page. Then I realized that there are actually people who make an Action property an interface type. My question is: what is the reason behind this? A sample demonstrating it would be great. Thanks in advance!
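
    A hedged sketch of the usual motivation (names invented): declaring the property by interface decouples the action from any one implementation, so the same action can carry a JPA entity, a plain DTO, or a test stub. Struts2 still needs a concrete instance to populate, so the field is initialized with one:

        // The action depends only on the contract.
        interface UserView
        {
            String getName();
            void setName(String name);
        }

        class BasicUser implements UserView
        {
            private String name;
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        class HelloAction extends ActionSupport
        {
            // pre-created so user.name=John has something to populate
            private UserView user = new BasicUser();

            public UserView getUser() { return user; }
            public void setUser(UserView user) { this.user = user; }
        }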

    Read the article

  • How do I make a web interface for a socket server

    - by mgroat
    I've got a socket server running (it's something that's basically like a chat server). Users can telnet into it, but I'd like to make a web interface. This is the first time I've ever done something like this, so I'm not really sure where to start. A few thoughts I've had:

    1. Have some server-side Python (or PHP) on my webserver which accesses the socket server. I think I know enough about sockets to have Python interact with the server, but how do I go about getting the website that the user sees to update in real time? Should I just have the website refresh every few seconds? I would prefer to do things this way if I can figure out how.

    2. Write a Java applet that interacts with the socket server, and embed the applet in the website. I would have to re-learn a language that I haven't touched in years, but my main goal here is learning, so that wouldn't be such a bad thing. The main problem I have with this is that it requires end users to have Java installed on their computers, which I'd rather not require.

    Is one of these two solutions the right way to go? Does anybody know where I can find a good tutorial to get started?

    Edit: There are no real security concerns with exposing the server to the internet.
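
    A hedged sketch of option 1, in which everything is assumed: a chat server listening on localhost:4000 that answers a RECENT command with recent lines. The browser polls this little HTTP bridge every few seconds and the bridge relays the chat server's reply:

        # Minimal polling bridge (Python 3): GET / returns the chat server's
        # reply to a hypothetical RECENT command as plain text.
        import socket
        from http.server import BaseHTTPRequestHandler, HTTPServer

        CHAT_HOST, CHAT_PORT = "localhost", 4000   # hypothetical chat server address

        class Bridge(BaseHTTPRequestHandler):
            def do_GET(self):
                with socket.create_connection((CHAT_HOST, CHAT_PORT)) as s:
                    s.sendall(b"RECENT\n")          # hypothetical protocol command
                    data = s.recv(65536)
                self.send_response(200)
                self.send_header("Content-Type", "text/plain; charset=utf-8")
                self.end_headers()
                self.wfile.write(data)

        if __name__ == "__main__":
            HTTPServer(("", 8080), Bridge).serve_forever()

    A page that fetches this URL on a timer gives the refresh-every-few-seconds version; long polling or WebSockets are the usual next step for true real-time updates.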

    Read the article
