Search Results

Search found 8739 results on 350 pages for 'core xii'.


  • Is there a way to toggle bluetooth and/or wifi on and off programmatically in iOS?

    - by Andy W
    I am looking for an easy way to toggle both bluetooth and wifi between on and off states on iOS 4.x devices (iPhone and iPad). I am constantly toggling these functions as I move between different locations and usage scenarios, and right now it takes multiple taps and visits to the Settings App. I am looking to create a simple App that lives on Springboard, that I can just tap and it will turn off the wifi if it's on, and vice versa, then immediately quit; similarly with an App for toggling bluetooth's state. I have the developer SDK, and am comfortable in Xcode and with iOS development, so I am happy to write the code required to create the App. I am just at a loss as to which API, private or not, has the required functionality to simply toggle the state of these facilities. Because this is scratching a very personal itch, I have no intent to try and sell the App or get it up on the App Store, so conforming to App guidelines on API usage is a non-issue. What I don't want to do is jailbreak the devices, as I want to keep the core software as shipped. Can anyone point me at some sample code or more info on achieving this goal? My Google-fu is letting me down, and if the information is out there for 4.x devices I just can't find it.

    Read the article

  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment and keep a central, bare repository where I pull and push most of my stuff. This centralized repository runs on Linux, and I check out to Windows XP/7, Mac and Linux. In all repositories I put the following in my .git/config:

        [core]
            autocrlf = true

    I don't have the flag safecrlf=true anywhere. The first time I modify stuff on my one Windows machine (XP) there is no problem, and when I look at the diff it looks fine. But when I do the same on the other Windows machine (7), all lines are shown as changed, but the local line endings are \r\n as expected (when checked in a hex editor). The same applies to a Mac OS X machine. Sometimes I get the feeling that the different systems fight over line endings, but I can't be sure (I'm losing track of all the times I change specific files). I didn't use to have autocrlf set, but set the flag many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage? Or are there other things that need configuring too? I tried git checkout -- . about a million times, but with no success.
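
    Not part of the original question, but a sketch of one commonly recommended setup for mixed environments (assuming Git 1.7.2 or newer): declare the normalization policy in a .gitattributes file that travels with the repository, so behaviour no longer depends on each clone's local core.autocrlf value. The file patterns below are placeholders.

        # .gitattributes, committed at the repository root
        *        text=auto
        *.sh     text eol=lf
        *.bat    text eol=crlf
        *.png    binary

    With the attributes in place, core.autocrlf is usually set to input on the Linux/Mac clones and true on Windows; existing checkouts typically need to be re-checked out (or re-cloned) before the spurious whole-file diffs go away.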

    Read the article

  • Django: DatabaseLockError exception with Djapian

    - by jul
    Hi, I've got the exception shown below when executing indexer.update(). I have no idea what to do: it used to work, and now the index database seems "locked". Can anybody help? Thanks.

        Environment:
        Request Method: POST
        Request URL: http://piem.org:8000/restaurant/add/
        Django Version: 1.1.1
        Python Version: 2.5.2
        Installed Applications:
        ['django.contrib.auth',
         'django.contrib.contenttypes',
         'django.contrib.sessions',
         'django.contrib.comments',
         'django.contrib.sites',
         'django.contrib.admin',
         'registration',
         'djapian',
         'resto',
         'multilingual']
        Installed Middleware:
        ('django.middleware.common.CommonMiddleware',
         'django.contrib.sessions.middleware.SessionMiddleware',
         'django.contrib.auth.middleware.AuthenticationMiddleware',
         'django.middleware.locale.LocaleMiddleware',
         'multilingual.middleware.DefaultLanguageMiddleware')

        Traceback:
        File "/var/lib/python-support/python2.5/django/core/handlers/base.py" in get_response
          92. response = callback(request, *callback_args, **callback_kwargs)
        File "/home/jul/atable/../atable/resto/views.py" in addRestaurant
          639. Restaurant.indexer.update()
        File "/home/jul/python-modules/Djapian-2.3.1-py2.5.egg/djapian/indexer.py" in update
          181. database = self._db.open(write=True)
        File "/home/jul/python-modules/Djapian-2.3.1-py2.5.egg/djapian/database.py" in open
          20. xapian.DB_CREATE_OR_OPEN,
        File "/usr/lib/python2.5/site-packages/xapian.py" in __init__
          2804. _xapian.WritableDatabase_swiginit(self, _xapian.new_WritableDatabase(*args))

        Exception Type: DatabaseLockError at /restaurant/add/
        Exception Value: Unable to acquire database write lock on /home/jul/atable/djapian_spaces/resto/restaurant/resto.index.restaurantindexer: already locked

    Read the article

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article

  • Parallel version of loop not faster than serial version

    - by Il-Bhima
    I'm writing a program in C++ to perform a simulation of a particular system. For each timestep, the biggest part of the execution is taken up by a single loop. Fortunately this is embarrassingly parallel, so I decided to use Boost Threads to parallelize it (I'm running on a 2 core machine). I would expect a speedup close to 2 times the serial version, since there is no locking. However I am finding that there is no speedup at all. I implemented the parallel version of the loop as follows:

      - Wake up the two threads (they are blocked on a barrier).
      - Each thread then performs the following:
          - Atomically fetch and increment a global counter.
          - Retrieve the particle with that index.
          - Perform the computation on that particle, storing the result in a separate array.
          - Wait on a "job finished" barrier.
      - The main thread waits on the job finished barrier.

    I used this approach since it should provide good load balancing (since each computation may take differing amounts of time). I am really curious as to what could possibly cause this slowdown. I always read that atomic variables are fast, but now I'm starting to wonder whether they have their performance costs. If anybody has some ideas what to look for or any hints I would really appreciate it. I've been bashing my head on it for a week, and profiling has not revealed much.

    Read the article

  • WPF drawing performance with large numbers of geometries

    - by MyFaJoArCo
    Hello, I have problems with WPF drawing performance. There are a lot of small EllipseGeometry objects (1024 ellipses, for example), which are added to three separate GeometryGroups with different foreground brushes. Afterwards, I render it all on a simple Image control. Code:

        DrawingGroup tmpDrawing = new DrawingGroup();
        GeometryGroup onGroup = new GeometryGroup();
        GeometryGroup offGroup = new GeometryGroup();
        GeometryGroup disabledGroup = new GeometryGroup();

        for (int x = 0; x < DisplayWidth; ++x)
        {
            for (int y = 0; y < DisplayHeight; ++y)
            {
                if (States[x, y] == true)
                    onGroup.Children.Add(new EllipseGeometry(new Rect((double)x * EDGE, (double)y * EDGE, EDGE, EDGE)));
                else if (States[x, y] == false)
                    offGroup.Children.Add(new EllipseGeometry(new Rect((double)x * EDGE, (double)y * EDGE, EDGE, EDGE)));
                else
                    disabledGroup.Children.Add(new EllipseGeometry(new Rect((double)x * EDGE, (double)y * EDGE, EDGE, EDGE)));
            }
        }

        tmpDrawing.Children.Add(new GeometryDrawing(OnBrush, null, onGroup));
        tmpDrawing.Children.Add(new GeometryDrawing(OffBrush, null, offGroup));
        tmpDrawing.Children.Add(new GeometryDrawing(DisabledBrush, null, disabledGroup));
        DisplayImage.Source = new DrawingImage(tmpDrawing);

    It works fine, but takes too much time - 0.5s on a Core 2 Quad, 2s on a Pentium 4. I need <0.1s everywhere. As you can see, all Ellipses are equal. The background of the control that holds my DisplayImage is solid (black, for example), so we can use this fact. I tried to use 1024 Ellipse elements instead of an Image with EllipseGeometries, and it was working much faster (~0.5s), but not fast enough. How can I speed this up? Regards, Oleg Eremeev P.S. Sorry for my English.

    Read the article

  • Career Choice in JEE, are EJBs standard?

    - by John Baker
    I have chosen to go the JEE route for a career path, but I have been having a hard time determining which core technologies I need to be most familiar with. I'm at the point where I can write web apps without a problem with JSPs and regular servlets using JDBC or some basic Hibernate stuff (I know HTML and CSS, and have used MVC extensively on a number of different platforms). What I'm trying to find out is whether there is some standard as far as J2EE technologies go. When I look at most of the job listings, occasionally you will see someone mention Struts or Spring, but rarely do I see any mention of EJBs. So my question is really: are EJBs basically required by most JEE employers? Or are most of them working with POJOs? Is it a mix? I have a hard time figuring out if I should put the majority of my time into Struts, Spring/Hibernate, EJBs, etc. And if I do need to master EJBs, which version should I learn, 2.1 or 3.0? 3.0 has some obviously better features, but I figure a lot of companies probably chose to write their apps in 2.1 just because it was the standard of the time, and now migrating would be a big deal. Any advice on this is greatly appreciated.

    Read the article

  • jQuery update not replacing js files in Drupal 6.16.

    - by vr3690
    Hi, I am using jQuery Update in Drupal 6.16 along with a lot of other modules. I am trying to use jQuery UI 1.7.2 to render tabs, but unfortunately they don't work properly since jQuery Update is not replacing the jQuery file (jquery 1.3.2). I checked the version using $.fn.jquery (in Firebug) and got 1.2.6 (not 1.3.2 as required) as the result - and, as expected, the aggregated js file was using the 1.2.6 version of jQuery (see source). Earlier I had just replaced the core script files in /misc with the js files in the sites/default/modules/jquery_update/replace folder (like you'd do in 5.x) and got the necessary result (I also renamed jquery.min.js to jquery.js). Now suddenly that stopped working after I upgraded to 6.x-2.0-alpha1 and also installed the Mollom module. Disabling/uninstalling Mollom or downgrading jQuery Update does not seem to help. The problem only occurs on the front page, though; other content pages have jQuery 1.3.2. The problem can be seen here. So, basically, for some reason, jQuery Update is not replacing the jQuery files (as it is supposed to) on the front page, and I cannot figure out why that happens. Any ideas?

    Read the article

  • How to implement a really efficient bitvector sort in Python

    - by xiao
    Hello guys! Actually this is an interesting topic from Programming Pearls: sorting 10-digit telephone numbers in limited memory with an efficient algorithm. You can find the whole story here.

    What I am interested in is just how fast the implementation could be in Python. I have done a naive implementation with the BitVector module. The code is as follows:

        from BitVector import BitVector
        import timeit
        import random
        import time
        import sys

        def sort(input_li):
            return sorted(input_li)

        def vec_sort(input_li):
            bv = BitVector(size=len(input_li))
            for i in input_li:
                bv[i] = 1
            res_li = []
            for i in range(len(bv)):
                if bv[i]:
                    res_li.append(i)
            return res_li

        if __name__ == "__main__":
            test_data = range(int(sys.argv[1]))
            print 'test_data size is:', sys.argv[1]
            random.shuffle(test_data)

            start = time.time()
            sort(test_data)
            elapsed = time.time() - start
            print "sort function takes " + str(elapsed)

            start = time.time()
            vec_sort(test_data)
            elapsed = time.time() - start
            print "vec_sort function takes " + str(elapsed)

    I have tested array sizes from 100 to 10,000,000 on my MacBook (2GHz Intel Core 2 Duo, 2GB SDRAM); the results are as follows:

        test_data size is: 1000
        sort function takes 0.000274896621704
        vec_sort function takes 0.00383687019348
        test_data size is: 10000
        sort function takes 0.00380706787109
        vec_sort function takes 0.0371489524841
        test_data size is: 100000
        sort function takes 0.0520560741425
        vec_sort function takes 0.374383926392
        test_data size is: 1000000
        sort function takes 0.867373943329
        vec_sort function takes 3.80475401878
        test_data size is: 10000000
        sort function takes 12.9204008579
        vec_sort function takes 38.8053860664

    What disappoints me is that even when the test_data size is 10,000,000, the sort function is still faster than vec_sort. Is there any way to accelerate the vec_sort function?
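
    (Not from the original post - just a rough sketch of one way this could be sped up.) Most of the time in vec_sort is likely spent in the BitVector module's per-bit indexing, which is implemented in pure Python. Using an ordinary Python list (or, on Python 2.6+, a bytearray) as the flag table trades some memory for much cheaper per-element access:

        def vec_sort_list(input_li):
            # one flag per possible value; plain list indexing is handled in C
            table = [0] * len(input_li)
            for i in input_li:
                table[i] = 1
            # walk the table once and keep the positions that were set
            return [i for i, flag in enumerate(table) if flag]

    This keeps the same algorithm (set a flag per value, then scan the flags in order) while avoiding the Python-level bit arithmetic; whether it actually beats the built-in sorted() still has to be measured.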

    Read the article

  • Concept of WNDCLASSEX, good programming habits and WndProc for system classes

    - by luiscubal
    I understand that the Windows API uses "classes", relying on the WNDCLASS/WNDCLASSEX structures. I have successfully gone through Windows API Hello World applications and understand that this class is used by our own windows, but also by Windows core controls, such as "EDIT", "BUTTON", etc. I also understand that it is somehow related to WndProc (it allows me to define a function for it). Although I can find documentation about this class, I can't find anything explaining the concept. So far, the only thing I found about it was this: "A Window Class has NOTHING to do with C++ classes." Which really doesn't help (it tells me what it isn't, but doesn't tell me what it is). In fact, this only confuses me more, since I'd be tempted to associate WNDCLASSEX with C++ classes and think that "WNDCLASSEX" represents a control type. So, my first question is: what is it? In second place, I understand that one can define a WndProc in a class. However, a window can also get messages from the child controls (or windows, or whatever they are called in the Windows API). How can this be? Finally, when is it good programming practice to define a new class? Per application (for the main frame), per frame, or one per control I define (if I create my own progress bar class, for example)? I know Java/Swing, C#/Windows.Forms, C/GTK+ and C++/wxWidgets, so I'll probably understand comparisons with these toolkits.

    Read the article

  • Customizing the TFS 2008 build sequence to avoid compilation and deploy SSRS

    - by Andrew
    I'm trying to create a CI process for SQL Server Reporting Services. I am fairly new to TFS but quite experienced with MSBuild. In the past I've used a combination of MSBuild and TeamCity, so the whole build process was more or less custom. Here lies the start of my problems: as the solution I am deploying only contains Report Server projects (rds), no compilation is required. I thought that I would override the first default task that TFS runs (EndToEndIteration) to replace the default TFS build sequence and inject my own. The first snag that I have come across is that the build always fails; how can I set the status of the build to success? Currently the EndToEndIteration task is very light and only has a message. Is this the best method to create a custom build process in TFS where compilation is not required? Or should I use the default sequence and override one of the hook tasks mentioned in http://msdn.microsoft.com/en-us/library/aa337604%28VS.80%29.aspx (i.e. AfterCompile)?

    The core steps that I'd like to achieve are:

      - Bundle the RDL and datasource files
      - Connect to the host server to register/deploy the reports
      - Re-apply any subscriptions that previously existed
      - Run tests to verify the deployment succeeded and is returning results as expected

    I have found another article on Reporting Services deployment: http://stackoverflow.com/questions/88710/reporting-services-deployment But it doesn't mention the best practice for customizing the standard build process. Any help would be appreciated.

    Read the article

  • Flex fixed and variable height - can it be set in markup?

    - by Prutswonder
    I've got the following Flex application markup:

        <app:MyApplicationClass xmlns:app="*" width="100%" height="100%" layout="vertical"
                horizontalScrollPolicy="off" verticalScrollPolicy="off">
            <mx:VBox id="idPageContainer" width="100%" height="100%" verticalGap="0"
                    horizontalScrollPolicy="off" verticalScrollPolicy="off">
                <mx:HBox id="idTopContainer" width="100%" height="28" horizontalGap="2">
                    (top menu stuff goes here)
                </mx:HBox>
                <mx:HBox id="idBottomContainer" width="100%" height="100%"
                        verticalScrollPolicy="off" clipContent="false">
                    (page stuff goes here)
                </mx:HBox>
            </mx:VBox>
        </app:MyApplicationClass>

    When I run it, it displays a top panel with a fixed height and a bottom panel with a variable height. I expect the bottom panel to take up the remaining height, but it somehow overflows off-page. The only way I have found to fix this height issue (so far) is to programmatically set the height to be fixed instead of variable:

        <mx:HBox id="idBottomContainer" width="100%" height="700"
                verticalScrollPolicy="off" clipContent="false">
            (page stuff goes here)
        </mx:HBox>

    And code-behind:

        package
        {
            import mx.containers.HBox;
            import mx.core.Application;
            import mx.events.ResizeEvent;
            // (...)

            public class MyApplicationClass extends Application
            {
                public var idBottomContainer:HBox;
                // (...)

                private function ON_CreationComplete(event:FlexEvent):void
                {
                    // (...)
                    addEventListener(ResizeEvent.RESIZE, ON_Resize);
                }

                private function ON_Resize(event:Event):void
                {
                    idBottomContainer.height = this.height - idTopContainer.height;
                }
            }
        }

    But this solution is too "dirty" and I'm looking for a more elegant way. Does anyone know an alternative?

    Read the article

  • Is WordPress MVC compliant?

    - by kovshenin
    Some people consider WordPress a blogging platform, some think of it as a CMS, some refer to WordPress as a development framework. Whichever it is, the question still remains. Is WordPress MVC compliant? I've read the forums and somebody asked about MVC about three years ago. There were some positive answers, and some negative ones. While nobody knows exactly what MVC is and everybody thinks of it in their own way, there's still a general concept that's present in all the discussions. I have little experience with MVC frameworks and there doesn't seem to be anything about the framework itself. Most of the MVC is done by the programmer, am I right? Now, going back to WordPress, could we consider the core rewrite engine (WP_Rewrite) the controller? Queries & plugin logic as the model? And themes as the view? Or am I getting it all wrong? Thanks ;)

    Read the article

  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything, I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that both produce identical results (basically just feeding a couple of different streams through the decoder and crc32-ing the outputs). When using the "-server" option with Sun JDK 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost, running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Execution times then become constant for each pass starting at the 3rd iteration of the test, yet the optimized version stays 12% slower than in client mode. I am also pretty sure it's not a garbage collection issue, since the code involves absolutely no object allocations after startup. The code consists mainly of some bit manipulation operations (stream decoding) and lots of basic floating-point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (feeds the stream to the test and excludes disk IO from the tests) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only that it's 15% instead of 12% there). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation in the measured execution times (using System.nanoTime, btw), the variation between different test runs with the same settings never exceeded 2%, usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine. Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?

    Read the article

  • Multiple SessionFactories in Windows Service with NHibernate

    - by Rob Taylor
    Hi all, I have a web app which connects to 2 DBs (one core, the other a logging DB). I must now create a Windows service which will use the same business logic/data access DLLs. However, when I try to reference 2 session factories in the service app and call the factory.GetCurrentSession() method, I get the error message "No session bound to current context". Does anyone have a suggestion about how this can be done?

        public class StaticSessionManager
        {
            public static readonly ISessionFactory SessionFactory;
            public static readonly ISessionFactory LoggingSessionFactory;

            static StaticSessionManager()
            {
                string fileName = System.Configuration.ConfigurationSettings.AppSettings["DefaultNHihbernateConfigFile"];
                string executingPath = System.IO.Path.GetDirectoryName(
                    System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase);
                fileName = executingPath + "\\" + fileName;
                SessionFactory = cfg.Configure(fileName).BuildSessionFactory();

                cfg = new Configuration();
                fileName = System.Configuration.ConfigurationSettings.AppSettings["LoggingNHihbernateConfigFile"];
                fileName = executingPath + "\\" + fileName;
                LoggingSessionFactory = cfg.Configure(fileName).BuildSessionFactory();
            }
        }

    The configuration file has the setting:

        <property name="current_session_context_class">call</property>

    The service sets up the factories:

        private ISession _session = null;
        private ISession _loggingSession = null;
        private ISessionFactory _sessionFactory = StaticSessionManager.SessionFactory;
        private ISessionFactory _loggingSessionFactory = StaticSessionManager.LoggingSessionFactory;
        ...
        _sessionFactory = StaticSessionManager.SessionFactory;
        _loggingSessionFactory = StaticSessionManager.LoggingSessionFactory;
        _session = _sessionFactory.OpenSession();
        NHibernate.Context.CurrentSessionContext.Bind(_session);
        _loggingSession = _loggingSessionFactory.OpenSession();
        NHibernate.Context.CurrentSessionContext.Bind(_loggingSession);

    So finally, I try to call the correct factory by:

        ISession session = StaticSessionManager.SessionFactory.GetCurrentSession();

    Can anyone suggest a better way to handle this? Thanks in advance! Rob

    Read the article

  • Minimal "Task Queue" with stock Linux tools to leverage Multicore CPU

    - by Manuel
    What is the best/easiest way to build a minimal task queue system for Linux using bash and common tools? I have a file with 9'000 lines; each line has a bash command line, and the commands are completely independent.

        command 1 > Logs/1.log
        command 2 > Logs/2.log
        command 3 > Logs/3.log
        ...

    My box has more than one core and I want to execute X tasks at the same time. I searched the web for a good way to do this. Apparently, a lot of people have this problem, but nobody has a good solution so far. It would be nice if the solution had the following features:

      - can interpret more than one command (e.g. command; command)
      - can interpret stream redirects on the lines (e.g. ls > /tmp/ls.txt)
      - only uses common Linux tools

    Bonus points if it works on other Unix clones without too exotic requirements.
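
    Not part of the original question, but a rough sketch of one possible approach (stock alternatives worth looking at are xargs -P and GNU parallel). The Python helper below is hypothetical: it reads the command file and runs N lines at a time through a shell, so multi-command lines and redirects keep working as written.

        # runqueue.py <workers> <command-file>   (hypothetical helper script)
        import sys
        from multiprocessing import Pool
        from subprocess import call

        def run(line):
            # shell=True so "cmd1; cmd2" and "cmd > file.log" behave as written
            return call(line, shell=True)

        if __name__ == "__main__":
            workers = int(sys.argv[1])                 # e.g. number of cores
            with open(sys.argv[2]) as f:
                lines = [l.strip() for l in f if l.strip()]
            Pool(workers).map(run, lines)              # blocks until every line has run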

    Read the article

  • Choosing between instance methods and separate functions?

    - by StackedCrooked
    Adding functionality to a class can be done by adding a method or by defining a function that takes an object as its first parameter. Most programmers that I know would choose to add an instance method. However, I sometimes prefer to create a separate function. For example, in the example code below, Area and Diagonal are defined as separate functions instead of methods. I find it better this way because I think these functions provide enhancements rather than core functionality. Is this considered a good/bad practice? If the answer is "it depends", then what are the rules for deciding between adding a method or defining a separate function?

        class Rect
        {
        public:
            Rect(int x, int y, int w, int h) :
                mX(x), mY(y), mWidth(w), mHeight(h)
            {
            }

            int x() const { return mX; }
            int y() const { return mY; }
            int width() const { return mWidth; }
            int height() const { return mHeight; }

        private:
            int mX, mY, mWidth, mHeight;
        };

        int Area(const Rect & inRect)
        {
            return inRect.width() * inRect.height();
        }

        float Diagonal(const Rect & inRect)
        {
            return std::sqrt(std::pow(static_cast<float>(inRect.width()), 2) +
                             std::pow(static_cast<float>(inRect.height()), 2));
        }

    Read the article

  • How can you connect to a password protected MS Access Database from a Spring JdbcTemplate?

    - by Tim Visher
    I need to connect to a password protected MS Access 2003 DB using the JDBC-ODBC bridge. I can't find out how to specify the password in the connect string, or even if that is the correct method of connecting. It would probably be relevant to mention that this is a Spring app which is accessing the database through a JdbcTemplate configured as a datasource bean in our application context file. Some relevant snippets:

    From application-context.xml:

        <bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
            <property name="dataSource" ref="legacyDataSource" />
        </bean>

        <bean id="jobsheetLocation" class="java.lang.String">
            <constructor-arg value="${jobsheet.location}"/>
        </bean>

        <bean id="legacyDataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
            <property name="driverClassName" value="${jdbc.legacy.driverClassName}" />
            <property name="url" value="${jdbc.legacy.url}"/>
            <property name="password" value="-------------" />
        </bean>

    From our build properties:

        jdbc.legacy.driverClassName=sun.jdbc.odbc.JdbcOdbcDriver
        jdbc.legacy.url=jdbc:odbc:Driver\={Microsoft Access Driver (*.mdb)};Dbq\=@LegacyDbPath@;DriverID\=22;READONLY\=true

    Any thoughts?

    Read the article

  • Wrappers of primitive types in arraylist vs arrays

    - by ismail marmoush
    Hi, In "Core java 1" I've read CAUTION: An ArrayList is far less efficient than an int[] array because each value is separately wrapped inside an object. You would only want to use this construct for small collections when programmer convenience is more important than efficiency. But in my software I've already used Arraylist instead of normal arrays due to some requirements, though "The software is supposed to have high performance and after I've read the quoted text I started to panic!" one thing I can change is changing double variables to Double so as to prevent auto boxing and I don't know if that is worth it or not, in next sample algorithm public void multiply(final double val) { final int rows = getSize1(); final int cols = getSize2(); for (int i = 0; i < rows; i++) { for (int j = 0; j < cols; j++) { this.get(i).set(j, this.get(i).get(j) * val); } } } My question is does changing double to Double makes a difference ? or that's a micro optimizing that won't affect anything ? keep in mind I might be using large matrices.2nd Should I consider redesigning the whole program again ?

    Read the article

  • Mac OS X and static boost libs -> std::string fail

    - by Ionic
    Hi all, I'm experiencing some very weird problems with static Boost libraries under Mac OS X 10.6.6. The error message is:

        main(78485) malloc: *** error for object 0x1000e0b20: pointer being freed was not allocated
        *** set a breakpoint in malloc_error_break to debug
        [1]    78485 abort (core dumped)

    and a tiny bit of example code which will trigger this problem:

        #define BOOST_FILESYSTEM_VERSION 3
        #include <boost/filesystem.hpp>
        #include <iostream>

        int main (int argc, char **argv)
        {
            std::cout << boost::filesystem::current_path().string() << '\n';
        }

    This problem always occurs when linking the static Boost libraries into the binary. Linking dynamically works fine, though. I've seen various reports of a quite similar OS X bug with GCC 4.2 and the _GLIBCXX_DEBUG macro set, but this one seems even more generic, as I'm neither using Xcode nor setting the macro (even undefining it does not help; I tried it just to make sure it's really not related to this problem). Does anybody have any pointers to why this is happening, or maybe even a solution (rather than the workaround of using the dynamic libraries)? Best regards, Mihai

    Read the article

  • dynamic JSF composite component styling/rendering

    - by Checkoff
    I have a little problem with a composite component. The component's implementation looks like:

        <composite:implementation>
            <h:outputStylesheet name="foo.css" library="bar"/>
            <div id="#{cc.clientId}">
                <composite:insertChildren/>
            </div>
        </composite:implementation>

    It is included dynamically into a Facelet page, which includes this component with JSTL core tags. The Facelet page is similar to the following one:

        <h:panelGroup id="viewport" layout="block">
            <c:if test="#{controller.object != null}">
                <c:forEach items="#{controller.object.elements}" var="element">
                    <c:if test="#{element.type == 'type1'}">
                        <my:componentTypeOne id="#{element.id}"/>
                    </c:if>
                    <c:if test="#{element.type == 'type2'}">
                        <my:componentTypeTwo id="#{element.id}"/>
                    </c:if>
                </c:forEach>
            </c:if>
        </h:panelGroup>

    So when I only render the viewport of the page, the components are rendered, but without the stylesheet defined within the composite component my:component. Is there any way to include the stylesheet on the fly without rendering the whole page?

    EDIT: extended the example code.

    Read the article

  • How to print an image file on a printer

    - by jackrobert
    Hi, I am writing a simple program to print an image in JSF. I have one image (sampleImage.png), and my PC is already connected to a printer. If I open the image manually and select the print option, I get the image from the printer. Now I want to print the image using JavaScript.

        <%@page contentType="text/html" pageEncoding="UTF-8"%>
        <%@ taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
        <%@ taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
        <%@ taglib uri="http://richfaces.org/a4j" prefix="a4j" %>
        <%@ taglib uri="http://richfaces.org/rich" prefix="rich"%>
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
            <title>Printer</title>
            <script type="text/javascript">
                function printImage() {
                    // Here I want to write a script to print the image
                }
            </script>
        </head>
        <body>
            <h:form id="fileViewerForm">
                <rich:panel>
                    <f:facet name="header">
                        <h:outputText value="Printer"/>
                    </f:facet>
                    <h:commandButton value="PrintImage" onclick="printImage();"/>
                    <rich:panel id="imageViewerPanel">
                        <h:graphicImage id="imageViewer" url="sampleImage.png"/>
                    </rich:panel>
                </rich:panel>
            </h:form>
        </body>
        </html>

    Please help me with this.

    Read the article

  • Delphi app freezes whole Win7 system

    - by avar
    Hello, I have a simple program that sorts a text file according to the length of the words on each line. This program works without problems on my old XP-based machine. Now when I run it on my new Win7/Intel Core i5 machine, it freezes the whole system, which goes back to normal after the program finishes its work. I've investigated the code and found the line causing the freeze; it was this specific line:

        caption := IntToStr(i) + '..' + IntTostr(ii);

    I changed it to

        caption := IntTostr(ii);   // slow rate of change

    and there is no freeze. Then I changed it to

        caption := IntTostr(i);    // fast rate of change

    and it freezes again. My complete main procedure code is:

        var
          tword: widestring;
          i, ii, li: integer;
        begin
          tntlistbox1.items.LoadFromFile('d:\new folder\ch.txt');
          tntlistbox2.items.LoadFromFile('d:\new folder\uy.txt');
          For ii := 15 Downto 1 Do                           // slow change
          Begin
            For I := 0 To TntListBox1.items.Count - 1 Do     // very fast change
            Begin
              caption := IntToStr(i) + '..' + IntTostr(ii);  // problematic line
              tword := TntListBox1.items[i];
              LI := Length(tword);
              If lI = ii Then
              Begin
                tntlistbox3.items.Add(Trim(tntlistbox1.Items[i]));
                tntlistbox4.items.Add(Trim(tntlistbox2.Items[i]));
              End;
            End;
          End;
        end;

    Any idea why, and how to fix it? I use Delphi 2007/Win32.

    Read the article

  • Cryptography: best practices for keys in memory?

    - by Johan
    Background: I have some data encrypted with AES (i.e. symmetric crypto) in a database. A server-side application, running on a (presumed) secure and isolated Linux box, uses this data. It reads the encrypted data from the DB and writes back encrypted data, only dealing with the unencrypted data in memory. So, in order to do this, the app is required to have the key stored in memory.

    The question is: are there any good best practices for securing the key in memory? A few ideas:

      - Keeping it in unswappable memory (for Linux: setting SHM_LOCK with shmctl(2)?)
      - Splitting the key over multiple memory locations.
      - Encrypting the key. With what, and how do you keep the... key key... secure?
      - Loading the key from file each time it's required (slow, and if the evildoer can read our memory, he can probably read our files too).

    Some scenarios in which the key might leak: the evildoer getting hold of a mem dump/core dump; bad bounds checking in code leading to information leakage.

    The first one seems like a good and pretty simple thing to do, but how about the rest? Other ideas? Any standard specifications/best practices? Thanks for any input!
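
    Not from the original question - a rough sketch of the "unswappable memory" idea using mlock(2) rather than SHM_LOCK, shown via Python/ctypes purely to illustrate the calls (a C/C++ service would call mlock directly). read_key_somehow and use_key are hypothetical placeholders, and any copies of the key that the runtime makes elsewhere are of course not covered by the lock.

        import ctypes

        libc = ctypes.CDLL("libc.so.6", use_errno=True)
        libc.mlock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]
        libc.munlock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

        key_buf = ctypes.create_string_buffer(32)        # room for a 256-bit AES key
        addr, size = ctypes.addressof(key_buf), ctypes.sizeof(key_buf)

        if libc.mlock(addr, size) != 0:                  # pin the page so it is never swapped out
            raise OSError(ctypes.get_errno(), "mlock failed")
        try:
            key_buf.raw = read_key_somehow()             # hypothetical: fill the locked buffer (32 bytes)
            use_key(key_buf)                             # hypothetical: hand the buffer to the crypto code
        finally:
            ctypes.memset(addr, 0, size)                 # wipe the key before releasing the memory
            libc.munlock(addr, size)

    Disabling core dumps for the process (setrlimit with RLIMIT_CORE set to 0) is a common companion measure for the core-dump leak scenario.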

    Read the article

  • Logging exceptions to database in NServiceBus

    - by IGoor
    If an exception occurs in my MessageHandler, I want to write the exception details to my database. How do I do this? Obviously, I can't just catch the exception, write to the database, and rethrow it, since NSB rolls back all changes (IsTransactional is set to true). I tried adding logging functionality in a separate handler, which I called using SendLocal if an exception occurred, but this does not work:

        public void Handle(MessageItem message)
        {
            try
            {
                DoWork();
            }
            catch(Exception exc)
            {
                Bus.SendLocal(new ExceptionMessage(exc.Message));
                throw;
            }
        }

    I also tried using Log4Net with a custom appender, but this also rolled back:

        Configure.With()
            .Log4Net<DatabaseAppender>(a => a.Log = "Log")

    Appender:

        public class DatabaseAppender : log4net.Appender.AppenderSkeleton
        {
            public string Log { get; set; }

            protected override void Append(log4net.Core.LoggingEvent loggingEvent)
            {
                if (loggingEvent.ExceptionObject != null)
                    WriteToDatabase(loggingEvent.ExceptionObject);
            }
        }

    Is there any way to log unhandled exceptions in the message handler when IsTransactional is true? Thanks in advance.

    Read the article
