Search Results

Search found 12753 results on 511 pages for 'small'.

Page 387 of 511

  • iPhone openGLES performance tuning

    - by genesys
    Hey there! I've been trying for quite a while now to optimize the framerate of my game without really making progress. I'm running on the newest iPhone SDK and have an iPhone 3G with OS 3.1.2. I issue around 150 draw calls, rendering about 1900 triangles in total (all objects are textured with two texture layers using multitexturing; most textures come from the same texture atlas, stored as a PVRTC 2bpp compressed texture). This renders on my phone at around 30 fps, which seems way too low for only 1900 triangles. I've tried many things to improve performance, including batching the objects together, transforming the vertices on the CPU and rendering them in a single draw call. That yields 8 draw calls (as opposed to 150), but performance is about the same (fps drops to around 26). I'm using 32-byte vertices stored in an interleaved array (12 bytes position, 12 bytes normals, 8 bytes UV). I'm rendering triangle lists and the vertices are ordered in tri-strip order. I did some profiling, but I don't really know how to interpret it.
    Sampling with Instruments yields this result: http://neo.cycovery.com/instruments_sampling.gif telling me that a lot of time is spent in "mach_msg_trap". I googled it and it seems this function is called in order to wait for something else. But wait for what?
    Instruments with the OpenGL ES module yields this result: http://neo.cycovery.com/intstruments_openglES_debug.gif but here I really have no idea what those numbers are telling me.
    Profiling with Shark didn't tell me much either: http://neo.cycovery.com/shark_profile_release.gif the largest number is 10%, spent in DrawTriangles, and the whole rest is spread across functions with very small percentages.
    Can anyone tell me what else I could do to figure out the bottleneck, and help me interpret this profiling information? Thanks a lot!

    Read the article

  • Large scale storage for incrementally-appended documents?

    - by Ben Dilts
    I need to store hundreds of thousands (potentially many millions) of documents that start out empty and are appended to frequently, but never otherwise updated or deleted. These documents are not interrelated in any way, and just need to be accessed by some unique ID. Read accesses are some subset of the document, which almost always starts midway through at some indexed location (e.g. "document #4324319, save #53 to the end"). These documents start very small, at several KB. They typically reach a final size around 500KB, but many reach 10MB or more. I'm currently using MySQL (InnoDB) to store these documents. Each of the incremental saves is just dumped into one big table with the document ID it belongs to, so reading part of a document looks like "select * from saves where document_id=14 and save_id >= 53 order by save_id", then manually concatenating it all together in code. Ideally, I'd like the storage solution to be easily horizontally scalable, with redundancy across servers (e.g. each document stored on at least 3 nodes) and easy recovery of crashed servers. I've looked at CouchDB and MongoDB as possible replacements for MySQL, but I'm not sure that either of them makes a whole lot of sense for this particular application, though I'm open to being convinced. Any input on a good storage solution?
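    For illustration, a minimal sketch of the read path described above, assuming a .NET client using MySQL Connector/NET; the "content" column name is hypothetical (the question only names document_id and save_id):

        // Minimal sketch of the read pattern described above, assuming MySQL
        // Connector/NET. The "content" column name is hypothetical.
        using System;
        using System.Text;
        using MySql.Data.MySqlClient;

        static class DocumentReader
        {
            public static string ReadFromSave(MySqlConnection conn, int documentId, int firstSaveId)
            {
                const string sql =
                    "select content from saves " +
                    "where document_id = @doc and save_id >= @save " +
                    "order by save_id";

                var result = new StringBuilder();
                using (var cmd = new MySqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@doc", documentId);
                    cmd.Parameters.AddWithValue("@save", firstSaveId);
                    using (MySqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Concatenate the incremental saves in save_id order.
                            result.Append(reader.GetString(0));
                        }
                    }
                }
                return result.ToString();
            }
        }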

    Read the article

  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read/written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP / HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness and performance)? Thanks in advance for any ideas, Adam
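    For reference, a minimal sketch of the kind of read loop described above, with the buffer size as a parameter so different sizes (2 KB, 64 KB, a couple of MiB) can be timed against each other; the progress callback stands in for the event mentioned in the question and is illustrative only:

        // Minimal sketch of a buffered copy loop with a progress callback; the
        // delegate and parameter names are illustrative, not from the question.
        using System;
        using System.IO;

        static class BufferedCopy
        {
            public static void Copy(string sourcePath, string destPath, int bufferSize,
                                    Action<long, long> onProgress)
            {
                byte[] buffer = new byte[bufferSize];
                using (FileStream input = File.OpenRead(sourcePath))
                using (FileStream output = File.Create(destPath))
                {
                    long total = input.Length;
                    long done = 0;
                    int read;
                    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        output.Write(buffer, 0, read);
                        done += read;
                        // Fire progress after each chunk; a smaller buffer means more
                        // frequent UI updates, a larger one means fewer, bigger reads.
                        if (onProgress != null)
                            onProgress(done, total);
                    }
                }
            }
        }

    Called as, say, Copy(src, dst, 64 * 1024, (done, total) => ReportProgress(done, total)) with a hypothetical ReportProgress handler, the same loop can be timed with different bufferSize values to see where throughput stops improving on a given disk or network share.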

    Read the article

  • Base 128 or 256 Encoding for the Binary Lexical Octet Adhoc Transport Protocol?

    - by Randolpho
    I'm in the process of implementing a network driver for the Binary Lexical Octet Adhoc Transport (BLOAT) protocols in the hopes of replacing the TCP/UDP/IP stack with a much more flexible XML structure. BLOAT is detailed in RFC 3252, so if you're unfamiliar with the protocol I highly recommend you read the entire RFC before providing any comments. Don't worry, it's short and sweet; you might even enjoy it. Anyway, my problem is this: BLOAT requires that the payload be Base64 encoded which doesn't make sense to me. I mean, sure, it's the internet standard for binary payloads, but there are better, more efficient encodings available: Base128 and Base256, for example. That the RFC requires Base64 and doesn't allow for any other payload encoding really bothers me. To that end, I'm considering a small optional change to the protocol. Embrace and extend, right? Anyway, I'd like to modify the payload element to accept an encoding attribute, which can extend the encoding to Base128 or Base256, or even to other encodings I can't conceive of at the moment. If the encoding attribute isn't present, Base64 would be assumed. So my question is this: should I? I mean, BLOAT is an accepted standard, even if it isn't exactly omnipresent. If I make this change, will there be compatibility issues? I don't foresee any, but perhaps you, oh great Stack Overflow Community, can? If I do implement this change, should I contact the original RFC author? Should I offer a supplemental RFC?

    Read the article

  • Informix, NHibernate, TransactionScope interaction difficulties

    - by John Prideaux
    I have a small program that is trying to wrap an NHibernate insert into an Informix database in a TransactionScope object using the Informix .NET Provider. I am getting the error specified below. The code without the TransactionScope object works -- including when the insert is wrapped in an NHibernate session transaction. Any ideas on what the problem is? BTW, without the EnterpriseServicesInterop, the Informix .NET Provider will not participate in a TransactionScope transaction (verified without NHibernate involved).
    Code Snippet:

        public static void TestTScope()
        {
            Employee johnp = new Employee { name = "John Prideaux" };
            using (TransactionScope tscope = new TransactionScope(
                TransactionScopeOption.Required,
                new TransactionOptions() { Timeout = new TimeSpan(0, 1, 0), IsolationLevel = IsolationLevel.ReadCommitted },
                EnterpriseServicesInteropOption.Full))
            {
                using (ISession session = OpenSession())
                {
                    session.Save(johnp);
                    Console.WriteLine("Saved John to the database");
                }
            }
            Console.WriteLine("Transaction should be rolled back");
        }

        static ISession OpenSession()
        {
            if (factory == null)
            {
                Configuration c = new Configuration();
                c.AddAssembly(Assembly.GetCallingAssembly());
                factory = c.BuildSessionFactory();
            }
            return factory.OpenSession();
        }

        static ISessionFactory factory;

    Stack Trace:

        NHibernate.ADOException was unhandled
        Message="Could not close IBM.Data.Informix.IfxConnection connection"
        Source="NHibernate"
        StackTrace:
            at NHibernate.Connection.ConnectionProvider.CloseConnection(IDbConnection conn)
            at NHibernate.Connection.DriverConnectionProvider.CloseConnection(IDbConnection conn)
            at NHibernate.Tool.hbm2ddl.SuppliedConnectionProviderConnectionHelper.Release()
            at NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.GetReservedWords(Dialect dialect, IConnectionHelper connectionHelper)
            at NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.Update(ISessionFactory sessionFactory)
            at NHibernate.Impl.SessionFactoryImpl..ctor(Configuration cfg, IMapping mapping, Settings settings, EventListeners listeners)
            at NHibernate.Cfg.Configuration.BuildSessionFactory()
            at HelloNHibernate.Employee.OpenSession() in D:\Development\ScratchProject\HelloNHibernate\Employee.cs:line 73
            at HelloNHibernate.Employee.TestTScope() in D:\Development\ScratchProject\HelloNHibernate\Employee.cs:line 53
            at HelloNHibernate.Program.Main(String[] args) in D:\Development\ScratchProject\HelloNHibernate\Program.cs:line 19
            at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
            at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
            at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
            at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Threading.ThreadHelper.ThreadStart()
        InnerException: IBM.Data.Informix.IfxException
            Message="ERROR - no error information available"
            Source="IBM.Data.Informix"
            ErrorCode=-2147467259
            StackTrace:
                at IBM.Data.Informix.IfxConnection.HandleError(IntPtr hHandle, SQL_HANDLE hType, RETCODE retcode)
                at IBM.Data.Informix.IfxConnection.DisposeClose()
                at IBM.Data.Informix.IfxConnection.Close()
                at NHibernate.Connection.ConnectionProvider.CloseConnection(IDbConnection conn)
            InnerException:

    Read the article

  • iPhone SDK 3.2 Universal App issue with .xib files

    - by Wesley
    Hello! So I am finding this process of universalizing my iPhone app to be a big headache! Am I alone in this? I sure hope not. Anyway, my question is regarding the .xib files for my universal application. I had my app running on iPhone OS 3.1, all ready to make the universal switch. I went up to Project/Upgrade Current Target for iPad/Universal Application, and it supposedly gave my app all the necessary iPad settings... So when I went to test it in the 3.2 SDK, the screen was big, meaning the toolbar was sized correctly for the iPad, but the image being displayed was the OS 3.1 one, meaning it was way too small. So I then went to the iPad Source folder, changed the name of my MainViewController.xib file to MainViewController-iPad.xib, and inserted the bigger image I had prepared for the iPad, and it still didn't work correctly. Then I went into my MainViewController.m file and changed the nib reference from MainViewController to MainViewController-iPad, and it worked! My only concern is that, since I had to "hard-code" it in, forcing it to read from my -iPad file, is that going to present issues for the OS 3.1 version? I can't go back and test the 3.1 version now for some reason; the option was removed from the Active SDK menu... If there is anyone out there who has experienced this, or has insight into what I am doing wrong, your help would be greatly appreciated. Thank you!

    Read the article

  • Could CouchDB benefit significantly from the use of BERT instead of JSON?

    - by Victor Rodrigues
    I really appreciate CouchDB's attempt to use universal web formats in everything it does: RESTful HTTP methods in every interaction, JSON objects, JavaScript code to customize the database and documents. CouchDB seems to scale pretty well, but the individual cost of making a request usually scares 'relational' people away. Many small business applications have to deal with only one machine and that's all. In that case the scalability talk doesn't mean much; we need more performance per request, or people will not use it. BERT (Binary ERlang Term, http://bert-rpc.org/) has proven to be a faster and lighter format than JSON, and it is native to Erlang, the language in which CouchDB is written. Could we benefit from that, using BERT documents instead of JSON ones? I'm not saying just for retrieving in views, but for everything CouchDB does, including syncing. And, as a consequence, use Erlang functions instead of JavaScript ones. This would modify some original CouchDB principles, because today it is very web oriented. Considering that I imagine few people would make their database API public, and that its data is usually accessed by users through an application, it would be a good deal to have the ability to configure CouchDB to work faster. HTTP+JSON calls could still be handled by CouchDB, with an extra cost in those cases because of parsing.

    Read the article

  • Get JVM to grow memory demand as needed, up to the VM size limit?

    - by Ira Baxter
    We ship a Java application whose memory demand can vary quite a lot depending on the size of the data it is processing. If you don't set the max VM (virtual memory) size, the JVM quite often quits with a GC failure on big data. What we'd like to see is the JVM requesting more memory as GC fails to provide enough, until the total available VM is exhausted; e.g., start with 128Mb and increase geometrically (or by some other step) whenever GC fails. The JVM ("java") command line allows explicit setting of max VM sizes (the various -Xm* options), and you'd think that would be designed to be adequate. We try to do this in a .cmd file that we ship with the application. But if you pick any specific number, you get one of two bad behaviors: 1) if your number is small enough to work on most target systems (e.g., 1Gb), it isn't big enough for big data, or 2) if you make it very large, the JVM refuses to run on those systems whose actual VM is smaller than specified. How does one set up Java to use the available VM when needed, without knowing that number in advance, and without grabbing it all on startup?

    Read the article

  • Python for a hobbyist programmer (a few questions)

    - by Matt
    I'm a hobbyist programmer (only in TI-Basic before now), and after much, much, much debating with myself, I've decided to learn Python. I don't have a ton of free time to teach myself a hundred languages, and all the programming I do will be for personal use or for distributing to people who need it, so I decided I needed one good, strong language to be good at. My questions:
    Is Python powerful enough to handle most things that a typical programmer might do in his off-time? I have in mind things like complex stat generators based on user input for tabletop games, making small games, automating install processes, and building interactive websites, but probably a hundred things along those lines.
    Does Python handle networking tasks fairly well?
    Can Python source be obfuscated, or is it going to be open source by nature? The reason I ask is that if I make something cool and distribute it, I don't want some idiot script kiddie to edit his own name in and say he wrote it.
    And how popular is Python compared to other languages? Ideally, my language would be good and useful, with help found online without extreme difficulty, but not so common that every idiot with a computer knows it. I like the idea of knowing a slightly obscure language.
    Thanks a ton for any help you can provide.

    Read the article

  • PyQt and unittest - how to handle signals and slots

    - by Einar
    Hello, a small application I'm developing uses a module I have written to check certain web services via a REST API. I've been trying to add unit tests to it so I don't break stuff, and I stumbled upon a problem. I use a lot of signal-slot connections to perform operations asynchronously. For example, a typical test would be (pseudo-Python), with postDataDownloaded as a signal:

        def testConnection(self):
            "Test connection and posts retrieved"
            def length_test():
                self.assertEqual(len(self.client.post_data), 5)
            self.client.postDataReady.connect(length_test)
            self.client.get_post_list(limit=5)

    Now, unittest will report this test as "ok" when running, regardless of the result (as another slot is being called), even if the asserts fail (I will get an unhandled AssertionError). Example when deliberately making the test fail:

        Test connection and posts retrieved ... ok
        [... more tests...]
        OK
        Traceback (most recent call last):
        [...]
        AssertionError: 4 != 5

    The slot inside the test is merely an experiment: I get the same results if it's outside (an instance method). I should also add that the various methods I'm calling all make HTTP requests, which means they take a bit of time (I need to mock the requests - in the meantime I'm using SimpleHTTPServer to fake the connections and give them proper data). Is there a way around this problem?

    Read the article

  • Able to get the events in fullCalendar, but not able to sync these events with time

    - by Ubaid
    I just loved fullCalendar and wanted to implement it in a small application; everything worked OK. I am able to get the events from my database through JSON to the front end, but all events are being listed as "ALL-DAY" events. I'm not able to figure out why. Here is the screenshot for the same. Any ideas what is going wrong? I am using ASP.NET and C#. I have already tried sending the start and end dates as ToString(), ToShortDateString(), ToString("s"), ToLongDateString(), and ToUniversalTime(). Nothing seems to be working for me at the moment. I tried hard-coding and sending the data too. Sample JSON of my data:

        [{ "id": "2", "title": "Event2", "start": "1274171700", "end": "1274175600" },
         { "id": "1", "title": "Event1", "start": "5/18/2010 16:30:00", "end": "5/18/2010 19:30:00" },
         { "id": "3", "title": "Event3", "start": "5/18/2010 2:05:00 PM", "end": "5/18/2010 3:10:00 PM" },
         { "id": "4", "title": "Event4", "start": "5/18/2010", "end": "5/18/2010" },
         { "id": "5", "title": "Event5", "start": "2010-05-18T14:05:00", "end": "2010-05-18T15:10:00" }]

    All the data above uses different date formats, and at the moment nothing seems to be working. fullCalendar accepts the day part fine, but not the time part, and I'm not sure why. Can anybody help?
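    As an illustration of the formats in play, here is a minimal C# sketch (not the poster's code) that emits a start/end value either as an ISO 8601 string or as a Unix timestamp in seconds, the two formats that fullCalendar of that era generally documents for event dates; the class and member names are hypothetical:

        // Minimal sketch (not the poster's code): two helpers for emitting event
        // dates in formats fullCalendar is generally documented to accept --
        // an ISO 8601 string or a Unix timestamp in seconds.
        using System;

        static class EventDateFormatting
        {
            static readonly DateTime Epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

            // e.g. "2010-05-18T14:05:00"
            public static string ToIso8601(DateTime value)
            {
                return value.ToString("yyyy-MM-dd'T'HH:mm:ss");
            }

            // e.g. 1274191500 for 2010-05-18 14:05:00 UTC
            public static long ToUnixSeconds(DateTime value)
            {
                return (long)(value.ToUniversalTime() - Epoch).TotalSeconds;
            }
        }

    Whichever format is chosen, using it consistently for every event's start and end (rather than mixing formats as in the sample JSON above) makes it easier to tell whether the remaining problem is on the serialization side or in the calendar configuration.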

    Read the article

  • SoundManager2 has irregular latency

    - by Stefan Monov
    I'm playing some notes at regular intervals. Each one is delayed by a random number of milliseconds, creating a jarring, irregular effect. How do I fix it? Note: I'm OK with some latency, just as long as it's consistent. Answers of the type "implement your own small SoundManager2 replacement, optimized for timing-sensitive playback" are OK, if you know how to do that :) but I'm trying to avoid rewriting my whole app in Flash for now. For an example of an app with zero audible latency, see the Flash-based ToneMatrix. Testcase (see it here live or get it in a zip):

        <head>
        <title></title>
        <script type="text/javascript" src="http://www.schillmania.com/projects/soundmanager2/script/soundmanager2.js">
        </script>
        <script type="text/javascript">
        soundManager.url = '.'
        soundManager.flashVersion = 9
        soundManager.useHighPerformance = true
        soundManager.useFastPolling = true
        soundManager.autoLoad = true

        function recur(func, delay) {
            window.setTimeout(function() {
                recur(func, delay);
                func();
            }, delay)
        }

        soundManager.onload = function() {
            var sound = soundManager.createSound("test", "test.mp3")
            recur(function() { sound.play() }, 300)
        }
        </script>
        </head>
        <body>
        </body>
        </html>

    Read the article

  • Worse is better. Is there an example?

    - by J.F. Sebastian
    Is there a widely used algorithm that has time complexity worse than that of another known algorithm, but is a better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might be of the form: there are algorithms A and B that have O(N**2) and O(N) time complexity respectively, but B has such a big constant that it has no advantage over A for inputs smaller than the number of atoms in the Universe. Example highlights from the answers: the Simplex algorithm -- worst-case exponential time -- vs. known polynomial-time algorithms for convex optimization problems. A naive median-of-medians algorithm -- worst-case O(N**2) -- vs. a known O(N) algorithm. Backtracking regex engines -- worst-case exponential -- vs. O(N) Thompson NFA-based engines. All these examples exploit worst-case vs. average scenarios. Are there examples that do not rely on the difference between the worst case and the average case? Related: The Rise of "Worse is Better". (For the purpose of this question the "Worse is Better" phrase is used in a narrower sense -- namely, algorithmic time complexity -- than in the article.) Python's Design Philosophy: The ABC group strived for perfection. For example, they used tree-based data structure algorithms that were proven to be optimal for asymptotically large collections (but were not so great for small collections). This example would be the answer if there were no computers capable of storing these large collections (in other words, large is not large enough in this case). The Coppersmith–Winograd algorithm for square matrix multiplication is a good example (it is the fastest (2008) but it is inferior to worse algorithms). Any others? From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)."

    Read the article

  • Keeping a reference to a System.Threading.Timer

    - by Daniel Bryars
    According to http://msdn.microsoft.com/en-us/library/system.threading.timer.aspx you need to keep a reference to a System.Threading.Timer to prevent it from being disposed. I've got a method like this:

        private void Delay(Action action, Int32 ms)
        {
            if (ms <= 0) { action(); }
            System.Threading.Timer timer = new System.Threading.Timer(
                (o) => action(), null, ms, System.Threading.Timeout.Infinite);
        }

    which I don't think keeps a reference to the timer. I've not seen any problems so far, but that's probably because the delay periods used have been pretty small. Is the code above wrong? And if it is, how do I keep a reference to the Timer? I'm thinking something like this might work:

        class timerstate
        {
            internal volatile System.Threading.Timer Timer;
        };

        private void Delay2(Action action, Int32 ms)
        {
            if (ms <= 0) { action(); }
            timerstate state = new timerstate();
            lock (state)
            {
                state.Timer = new System.Threading.Timer(
                    (o) =>
                    {
                        lock (o)
                        {
                            action();
                            ((timerstate)o).Timer.Dispose();
                        }
                    },
                    state, ms, System.Threading.Timeout.Infinite);
            }
        }

    The locking business is so I can get the timer into the timerstate class before the delegate gets invoked. It all looks a little clunky to me. Perhaps I should regard the chance of the timer firing before it has finished constructing and being assigned to the property in the timerstate instance as negligible, and leave the locking out.
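    One way to satisfy the MSDN guidance quoted above is simply to keep each timer reachable until it has fired. A minimal sketch, assuming a static collection is acceptable as the holder of those references (the class and member names are illustrative, not from the question):

        // Minimal sketch, not a drop-in replacement for the question's Delay():
        // a static collection roots each timer until its callback has run.
        using System;
        using System.Collections.Generic;
        using System.Threading;

        static class DelayedActions
        {
            private static readonly object Sync = new object();
            private static readonly List<Timer> Pending = new List<Timer>();

            public static void Delay(Action action, int ms)
            {
                if (ms <= 0)
                {
                    action();
                    return;
                }

                Timer timer = null;
                timer = new Timer(_ =>
                {
                    try { action(); }
                    finally
                    {
                        lock (Sync) { Pending.Remove(timer); }
                        timer.Dispose();
                    }
                }, null, Timeout.Infinite, Timeout.Infinite);

                // Add to the collection before arming the timer so the reference
                // exists before the callback can possibly fire.
                lock (Sync) { Pending.Add(timer); }
                timer.Change(ms, Timeout.Infinite);
            }
        }

    Creating the timer unarmed and only calling Change() after it has been stored also sidesteps the ordering problem the question's locking is trying to solve.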

    Read the article

  • Google App Engine application instance recycling and response times...

    - by Konrad
    Hi, I posted this on the GAE for Java group, but I hope to get some answers here quicker :) I decided to do some long-run performance tests on my application. I created a small client hitting the app every 5-30 minutes, and I ran 3-5 threads with such clients. I noticed huge differences in response times and started to investigate. I found the reason very quickly. I am experiencing the same issues as described in the following topics:
    - Uneven response time between connection to server to first byte sent
    - Application instances seem to be too aggressively recycled
    - Getting 'Request was aborted after waiting too long to attempt to service your request.' after application idle
    I am using the Spring Framework; it takes around 18-20s to start an app instance, which causes response times to range from 1s (when a request hits a running app - very rare) to 22s when a fresh application instance is created. Is there any solution for this? I was thinking about creating a most basic servlet performing the critical tasks (serving API calls) and leaving the UI as is, but then I would lose all the benefits of the Spring Framework. Is there any solution for this? After solving (hacking around) the numerous constraints of App Engine that I hit while developing my app, this is the one I think will make me move off App Engine... it's simply too much to spend all my time thinking about how to work around GAE problems rather than how to solve my application's problems... Any help? Regards, Konrad

    Read the article

  • Android: ListActivity design - changing the content of the List Adapter

    - by Rob
    Hi, I would like to write a rather simple content application which displays a list of textual items (along with a small pic). I have a standard menu in which each menu item represents a different category of textual items (news, sports, leisure etc.). Pressing a menu item will display a list of textual items of that category. Now, having a separate ListActivity for each category seems like overkill (or does it?..) Naturally, it makes much more sense to use one ListActivity and replace the data of its adapter when each category is loaded. My concern is when "back" is pressed: the adapter is loaded with items of the current category, and now I need to display the list of the previous category (and enable clicking on its list items too...). Since I have only one activity, I thought of a backup-and-restore mechanism in the onPause() and onResume() methods, as well as making some distinction as to whether these methods are invoked as a result of a "new" event (menu item selected) or a "back" press. This seems very cumbersome for such a trivial use case... Am I missing something here? Thanks, Rob

    Read the article

  • nHibernate strategies in a web farm

    - by Pete Nelson
    Our current project at work is a new MVC web site that will use a WCF service, primarily to access a 3rd-party billing system via a web service, as well as a small SQL database for user personalization. The WCF service uses NHibernate for the SQL database. We'd like to implement some sort of web farm for load balancing as well as failover and maintenance. I'm trying to decide the best way to handle NHibernate's caching and database concurrency if there are multiple WCF services running. Some scenarios I've been thinking about:
    1) Multiple IIS servers, one WCF server. With this setup, the WCF server would be a single point of failure, but there would be no issues with NHibernate caching or database concurrency.
    2) Multiple IIS servers, each with its own WCF service. This removes the single point of failure, but now NHibernate on one machine would not know about database changes made by another machine.
    One solution to number 2 would be to use an IStatelessSession, so we're not doing any caching and NHibernate is always fetching directly from the database (a sketch follows below). This might be the most feasible, as our personalization database has very few objects in it. I'm also considering a second-level cache such as memcached or Velocity, but it may be overkill for this system. I'm putting this out there to see if anyone has experience doing this sort of architecture and to get some ideas for a solution. Thanks!
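    For the IStatelessSession option mentioned above, a minimal sketch of what that could look like, assuming NHibernate's stateless-session API (no first-level cache, no automatic dirty checking); the PreferenceStore and Preference types are illustrative, not from the project described:

        // Minimal sketch, assuming NHibernate's IStatelessSession API; the entity
        // and repository names are illustrative.
        using NHibernate;

        public class Preference
        {
            public virtual int Id { get; set; }
            public virtual string Value { get; set; }
        }

        public class PreferenceStore
        {
            private readonly ISessionFactory factory;

            public PreferenceStore(ISessionFactory factory)
            {
                this.factory = factory;
            }

            public Preference Load(int id)
            {
                // Each call reads straight from the database, so a change made by
                // another node in the farm is visible on the next read.
                using (IStatelessSession session = factory.OpenStatelessSession())
                {
                    return session.Get<Preference>(id);
                }
            }

            public void Save(Preference p)
            {
                using (IStatelessSession session = factory.OpenStatelessSession())
                using (ITransaction tx = session.BeginTransaction())
                {
                    session.Update(p);   // or Insert(p) for a new row
                    tx.Commit();
                }
            }
        }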

    Read the article

  • How to prevent LaTeX memory overflow

    - by drasto
    I've got a LaTeX macro that makes small pictures. In each picture I need to draw an area whose borders are quadratic Bézier curves, and the area has to be filled. I did not know how to do that, so currently I'm "filling" the area by drawing plenty of Bézier curves inside it... This slows down typesetting, and when the macro is used many times (so TeX is drawing really a lot of quadratic Bézier curves) it produces the following error:

        ! TeX capacity exceeded, sorry [main memory size=3000000].

    How can I prevent this error (by freeing memory after the macro, or some such)? Or, even better, how do I fill the area bounded by two quadratic Bézier curves? Code that produces the error:

        \usepackage{forloop}
        \usepackage{picture}
        \usepackage{eepic}
        ...
        \linethickness{\lineThickness\unitlength}%
        \forloop[\lineThickness]{cy}{\cymin}{\value{cy} < \cymax}{%
          \qbezier(\ax, \ay)(\cx, \value{cy})(\bx, \by)%
        }%

    Here are some example values for the variables:

        \setlength{\unitlength}{0.01pt}
        \lineThickness=20
        %cy is just a counter - initial value is not important
        \cymin=450
        \cymax=900
        %from the following, only the difference between \ax and \bx is important
        \ax=0
        \ay=0
        \bx=550
        \by=0

    Note: to reproduce the error this code has to execute approximately 150 times (could be more, depending on your LaTeX memory settings). Thanks a lot for any help.

    Read the article

  • ASP.NET Projects with Two Versions of AjaxControlToolkit

    - by Chris
    In my solution I have three projects. Project A is a web app and uses version 1.0.10618.0 of the AjaxControlToolkit. I would love to upgrade it to the latest, but unfortunately any newer release completely breaks a portion of my site. Project B is also a web app but is a completely new software product, so it uses (and relies on) the latest version of the AjaxControlToolkit. Everything works great. Though A and B are totally different products, they use the same DB and rely on the same ClassLibrary. Project C is a small web app that ties A and B together with functionality like forgot-password pages. The pages in this app reside in a virtual directory of both A and B. Project C currently uses v1.0.10618.0 of the toolkit, so it works with Project A but fails with Project B because the manifest definitions of the DLLs don't match (to be expected). What I've done is build a new DLL of the toolkit, change the assembly and namespace to AjaxControlToolkit_v1, and then change all v1 references to this new DLL so the old and new versions can sit side by side in the same bin folder and nobody complains. I then changed my web.config controls tag to look like this:

        <add tagPrefix="ajaxToolkit" namespace="AjaxControlToolkit_v1" assembly="AjaxControlToolkit_v1, Version=1.0.10618.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e"/>

    This all works except that I get a runtime error: Unknown server tag 'ajaxToolkit:AnimationExtender'. I can't figure out why; any ideas on how to remedy it?

    Read the article

  • Stretch Background Image Resizes With Browser Windows

    - by user241673
    I am trying to replicate the image resizing found at http://devkick.com/lab/fsgallery/ but with the code I have below it is not working properly. When resizing the browser window to have a small width and a big height, white space shows up at the bottom of the page. Feel free to see it and edit it at http://jsbin.com/ifolu3
    The CSS:

        html, body { width:100%; height:100%; overflow:hidden; }
        div.bg { position:absolute; width:200%; height:200%; top:-50%; left:-50%; }
        img.bg { min-height:50%; min-width:50%; margin:0 auto; display:block; }

    The JS/jQuery:

        $(window).resize(function(){
            var ratio = Math.max($(window).width()/$('img.bg').width(), $(window).height()/$('img.bg').height());
            if ($(window).width() $(window).height()) {
                $('img.bg').css({width:image.width()*ratio, height:'auto'});
            } else {
                $('img.bg').css({width:'auto', height:image.height()*ratio});
            }
        });

    The HTML:

        <body>
        <div class="bg">
        <img class="bg" src="bg.jpg" />
        </div>
        </body>

    Read the article

  • Prism - loading modules causes activation error

    - by Noich
    I've created a small Prism application (using MEF) with the following parts:
    - Infrastructure project, which contains a global file of region names.
    - Main project, which has 2 buttons to load other modules.
    - ModulesInterfaces project, where I've defined an IDataAccess interface for my DataAccess module.
    - DataAccess project, where there are several files but one main DataAccess class that implements both IDataAccess and IModule.
    When the user clicks a button, I want to show him a list of employees, so I need the DataAccess module. My code is as follows:

        var namesView = new EmployeesListView();
        var namesViewModel = new EmployeesListViewModel(employeeRepository);
        namesView.DataContext = namesViewModel;
        namesViewModel.EmployeeSelected += EmployeeSelected;
        LocateManager();
        manager.Regions["ContentRegion"].Add(new NoEmployeeView(), "noEmployee");
        manager.Regions["NavigationRegion"].Add(namesView);

    I get an exception when I'm trying to get DataAccess as a service (via the ServiceLocator). I'm not sure that's the right way to go, but trying to get it from the ModuleCatalog as type IDataAccess returns an empty enumeration. I'd appreciate some directions. Thanks.
    The problematic code line:

        object instance = ServiceLocator.Current.GetInstance(typeof(DataAccess.DataAccess));

    The exact exception: Activation error occured while trying to get instance of type DataAccess, key "". The inner exception is null; the stack trace:

        at Microsoft.Practices.Prism.MefExtensions.MefServiceLocatorAdapter.DoGetInstance(Type serviceType, String key) in c:\release\WorkingDir\PrismLibraryBuild\PrismLibrary\Desktop\Prism.MefExtensions\MefServiceLocatorAdapter.cs:line 76
        at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType, String key)

    Edit: the same exception is thrown for:

        container = (CompositionContainer)ServiceLocator.Current.GetInstance(typeof(CompositionContainer));
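    For comparison, a minimal sketch (an assumption about how the composition could be wired, not the poster's actual code) that exports the DataAccess class against its IDataAccess contract with MEF and resolves it through the ServiceLocator by that contract rather than by the concrete type:

        // Minimal sketch, assuming MEF attribute-based composition; in the real
        // solution IDataAccess lives in the ModulesInterfaces project.
        using System.ComponentModel.Composition;
        using Microsoft.Practices.Prism.MefExtensions.Modularity;
        using Microsoft.Practices.Prism.Modularity;
        using Microsoft.Practices.ServiceLocation;

        public interface IDataAccess
        {
            // members defined in ModulesInterfaces
        }

        // Exported both as a Prism module and as the IDataAccess contract, so the
        // container can resolve it by the interface.
        [ModuleExport(typeof(DataAccess))]
        [Export(typeof(IDataAccess))]
        public class DataAccess : IModule, IDataAccess
        {
            public void Initialize()
            {
                // module initialization
            }
        }

        public static class DataAccessLocator
        {
            public static IDataAccess Resolve()
            {
                // Resolve by the exported contract, not the concrete type.
                return ServiceLocator.Current.GetInstance<IDataAccess>();
            }
        }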

    Read the article

  • js popup window to play .flv flash video using jwplayer.swf

    - by Mike Trader
    I would like to adjust this code so that it does not crash the browser when expanded to full screen, and so that the popup closes when it loses focus.

        <html>
        <head>
        <title>Popup Example</title>
        <center>
        <div class="yt_container">
        <div id="yt_the_video" class="yt_video_full">
        <script type="text/javascript" src="swfobject.js"></script>
        <script type="text/javascript">
        var s1 = new SWFObject("player.swf","ply","640","500","9","#FFFFFF");
        s1.addParam("allowfullscreen","true");
        s1.addParam("allownetworking","all");
        s1.addParam("allowscriptaccess","always");
        s1.addParam("flashvars",'&file=GJClip.flv&autostart=true');
        </script>
        </head>
        <body bgcolor="#CCCFFF">
        <img alt="GJ" src="GJPlay.jpg" width="80" height="60" onClick="s1.write('yt_the_video');">
        </body>
        </html>

    I have to have many small thumbnails on a page, and each one needs to open up to a full-size (640x480) video with controls when clicked. Having looked at Shadowbox (it dims the web page behind it, which I'm not allowed to do) and Lightbox (which I cannot get to work at all), I am down to a home-grown solution, which I prefer anyway.

    Read the article

  • ws-xmlrpc claims error on part of service but other clients work fine

    - by mludd
    I've been trying to connect to an rTorrent instance using ws-xmlrpc and it just isn't going too well. The URL I'm using is the same one I've been using to verify that rTorrent's XMLRPC support is fine (which it appears to be, since both a native OS X application and a small Python script I threw together can talk to it just fine without any errors). However, when I try connecting with ws-xmlrpc I get org.apache.xmlrpc.XmlRpcException: Failed to create input stream: Unexpected end of file from server at the top of my stack trace, followed by a bunch of steps down to:

        java.net.SocketException: Unexpected end of file from server
            at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:769)
            ...

    So basically, it seems that ws-xmlrpc is convinced that the reply from rTorrent is malformed somehow, but other libraries apparently have no problem with it. The code I use to call rTorrent is:

        private Object callRTorrent(String command, Object[] params) {
            Object result = null;
            try {
                // xmlrpcclient is an XmlRpcClient object and is instantiated in
                // the class constructor
                result = xmlrpcclient.execute(command, params);
            } catch (XmlRpcException xre) {
                System.out.println("Unable to execute method " + command);
                xre.printStackTrace();
            }
            return result;
        }

    with command set to system.listMethods and params set to an empty Object[]. From reading documentation and googling, my conclusion is that I'm not doing anything obviously wrong, and this problem doesn't appear to be common, so does anyone have a clue what's going on here?

    Read the article

  • Configuring ASP.NET MVC ActionLink format with GoDaddy shared hosting

    - by Maxim Z.
    Background:
    I have a GoDaddy shared Windows hosting plan and I'm running into a small issue with multiple domains. Many people have previously reported such an issue, but I am not interested in trying to resolve that problem altogether; all I want to accomplish is to change the format of my ActionLinks.
    Issue:
    Let's say the domain that is mapped to my root hosting directory is example.com. GoDaddy forces mapping of other domains to subdirectories of the root. For example, my second domain, example1.com, is mapped to example.com/example1. I uploaded my ASP.NET MVC site to such a subdirectory, only to find that ActionLinks that are for navigation have the following format:
    http://example1.com/example1/Controller/Action
    In other words, even when I use the domain that is mapped to the subdirectory, the subdirectory is still used in the URL. However, I noticed that I can also access the same path by going to:
    http://example1.com/Controller/Action (leaving out the subdirectory)
    What I want to achieve:
    I want to have my ActionLinks automatically drop the subdirectory, as it is not required. Is this possible without changing the ActionLinks into plain old URLs?

    Read the article

  • Why am I getting a Heap Corruption Error?

    - by vaidya.atul
    I am new to C++. I am getting a HEAP CORRUPTION ERROR. Any help will be highly appreciated. Below is my code:

        class CEntity {
            //some member variables
            CEntity(string section1, string section2);
            CEntity();
            virtual ~CEntity();
            //pure virtual function ..
            virtual CEntity* create() const = 0;
        };

    I derive CLine from CEntity as below:

        class CLine : public CEntity {
            // Again some variables ...
            // Constructor and destructor
            CLine(string section1, string section2);
            CLine();
            ~CLine();
            CLine* Create() const;
        }

        // CLine Implementation
        CLine::CLine(string section1, string section2) : CEntity(section1, section2) {};
        CLine::CLine();
        CLine* CLine::create() const { return new CLine(); }

    I have another class, CReader, which uses CLine objects and populates them in a multimap as below:

        class CReader {
        public:
            CReader();
            ~CReader();
            multimap<int, CEntity*> m_data_vs_entity;
        };

        //CReader Implementation
        CReader::CReader()
        {
            m_data_vs_entity.clear();
        };

        CReader::~CReader()
        {
            multimap<int, CEntity*>::iterator iter;
            for (iter = m_data_vs_entity.begin(); iter != m_data_vs_entity.end(); iter++)
            {
                CEntity* current_entity = iter->second;
                if (current_entity)
                    delete current_entity;
            }
            m_data_vs_entity.clear();
        }

    I am reading the data from a file and then populating the CLine class. The map gets populated in a function of the CReader class. Since CEntity has a virtual destructor, I expect the code in CReader's destructor to work. In fact, it does work for small files, but I get a HEAP CORRUPTION ERROR while working with bigger files. If there is something fundamentally wrong, please help me find it, as I have been scratching my head for quite some time now. Thanks in advance and awaiting a reply, Regards, Atul

    Read the article
