Search Results

Search found 9403 results on 377 pages for 'high tech'.

  • CIL and JVM: little-endian to big-endian in C# and Java

    - by Haythem
    Hello, I am using C# on the client, where I convert double values to a byte array. I am using Java on the server, with writeDouble and readDouble to convert between double values and byte arrays. The problem is that the double values arriving from C# are not the double values that were originally written. According to the Java documentation, writeDouble converts the double argument to a long using the doubleToLongBits method and then writes that long value to the underlying output stream as an 8-byte quantity, high byte first; doubleToLongBits returns a representation of the specified floating-point value according to the IEEE 754 floating-point "double format" bit layout. The program on the server is waiting for 64-102-112-0-0-0-0-0 from C# in order to convert it to 1700.0, but it is receiving 0000014415464 from C# after C# converts 1700.0. This is my code in C#:

        using System;
        using System.IO;

        class User
        {
            double workingStatus;

            public void persist()
            {
                byte[] dataByte;
                using (MemoryStream ms = new MemoryStream())
                {
                    using (BinaryWriter bw = new BinaryWriter(ms))
                    {
                        bw.Write(workingStatus);
                        bw.Flush();
                        bw.Close();
                    }
                    dataByte = ms.ToArray();
                    for (int j = 0; j < dataByte.Length; j++)
                    {
                        Console.Write(dataByte[j]);
                    }
                }
            }

            public double WorkingStatus
            {
                get { return workingStatus; }
                set { workingStatus = value; }
            }
        }

        class Test
        {
            static void Main()
            {
                User user = new User();
                user.WorkingStatus = 1700.0;
                user.persist();
            }
        }

    Thank you for the help.
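
    One way to reconcile the two sides, assuming the Java server keeps using readDouble (which expects big-endian, network byte order), would be to reverse the byte order on the C# side before sending. A minimal sketch, not the original poster's code:

        // Sketch: produce big-endian bytes for a double in C#, matching the
        // layout Java's DataInputStream.readDouble expects.
        double value = 1700.0;
        byte[] bytes = BitConverter.GetBytes(value);   // little-endian on x86
        if (BitConverter.IsLittleEndian)
        {
            Array.Reverse(bytes);                      // flip to big-endian order
        }
        // write 'bytes' to the stream instead of using BinaryWriter.Write(double)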

  • Should I store generated code in source control

    - by Ron Harlev
    This is a debate I'm taking part in. I would like to get more opinions and points of view. We have some classes that are generated at build time to handle DB operations (in this specific case with SubSonic, but I don't think that is very important to the question). The generation is set up as a pre-build step in Visual Studio, so every time a developer (or the official build process) runs a build, these classes are generated and then compiled into the project. Now some people claim that keeping these classes in source control could cause confusion if the code you get doesn't match what would have been generated in your own environment. I would like to have a way to trace back the history of the code, even if it is usually treated as a black box. Any arguments or counter-arguments? UPDATE: I asked this question because I really believed there was one definitive answer. Looking at all the responses, I can say with a high level of certainty that there is no such answer. The decision should be made based on more than one parameter. Reading the answers below provides a very good guideline to the types of questions you should be asking yourself when deciding on this issue. I won't select an accepted answer at this point, for the reasons mentioned above.

  • https not redirecting to mongrel upstream

    - by kip
    Normal HTTP is working fine for me with nginx and mongrel; however, when I attempt to use HTTPS I am directed to the "Welcome to nginx" page. The relevant configuration:

        http {
            # passenger_root /opt/passenger-2.2.11;
            # passenger_ruby /usr/bin/ruby1.8;
            include       mime.types;
            default_type  application/octet-stream;

            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            #                '$status $body_bytes_sent "$http_referer" '
            #                '"$http_user_agent" "$http_x_forwarded_for"';
            #access_log logs/access.log main;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;

            upstream mongrel {
                server 00.000.000.000:8000;
                server 00.000.000.000:8001;
            }

            server {
                listen 443;
                server_name domain.com;

                ssl on;
                ssl_certificate /etc/ssl/localcerts/domain_combined.crt;
                ssl_certificate_key /etc/ssl/localcerts/www.domain.com.key;
                # ssl_session_timeout 5m;
                # ssl_protocols SSLv2 SSLv3 TLSv1;
                # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
                # ssl_prefer_server_ciphers on;

                location / {
                    root /current/public/;
                    index index.html index.htm;
                    proxy_set_header X_FORWARDED_PROTO https;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_pass http://mongrel;
                }
            }
        }

  • Linux 2.6.31 Scheduler and Multithreaded Jobs

    - by dsimcha
    I run massively parallel scientific computing jobs on a shared Linux computer with 24 cores. Most of the time my jobs are capable of scaling to 24 cores when nothing else is running on this computer. However, it seems that when even one single-threaded job that isn't mine is running, my 24-thread jobs (which I set to high nice values) only manage to get ~1800% CPU (using Linux notation). Meanwhile, about 500% of the CPU cycles (again, using Linux notation) are idle. Can anyone explain this behavior and tell me what I can do to get all 23 of the cores that aren't being used by someone else? Notes: In case it's relevant, I have observed this on slightly different kernel versions, though I can't remember which ones off the top of my head. The CPU architecture is x64. Is it at all possible that it's relevant that my 24-core jobs are 32-bit and the other jobs I'm competing with are 64-bit? Edit: One thing I just noticed is that going up to 30 threads seems to alleviate the problem to some degree; it gets me up to ~2100% CPU.

  • How does real-time collaboration with multiple clients work in a system using operational transformation?

    - by Saikat Chakrabarti
    I just finished reading High-Latency, Low-Bandwidth Windowing in the Jupiter Collaboration System and I mostly followed everything until part 6: global consistency. This part describes how the system described in the paper can be extended to accommodate multiple clients connected to the server. However, the explanation is very short and essentially says the system will work if the central server merely forwards client messages to all the other clients. I don't really understand how this works, though. What state vector would be sent in the message that is forwarded to all the other clients? Does the server maintain separate state vectors for each client? Does it maintain a separate copy of the widgets locally for each client? The simple example I can think of is this setup: imagine client A, the server, and client B, with client A and client B both connected to the server. To start, all three have the state object "ABCD". Then client A sends the message "insert character F at position 0" at the same time client B sends the message "insert character G at position 0" to the server. It seems like simply relaying client A's message to client B and vice versa doesn't actually handle this case. So what exactly does the server do?
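
    For intuition about the concurrent-insert example, the usual resolution (a rough sketch of the general operational-transformation idea, not the paper's exact algorithm; the names and the tie-breaking rule are invented for illustration) is that each side applies its own edit immediately and then transforms the edit it receives from the other side against it:

        // Minimal sketch of transforming an incoming insert against a concurrent local insert.
        final class Insert {
            final int pos;
            final char ch;
            final int clientId;
            Insert(int pos, char ch, int clientId) {
                this.pos = pos; this.ch = ch; this.clientId = clientId;
            }
        }

        static Insert transform(Insert incoming, Insert local) {
            // If the local insert landed at or before the incoming position
            // (ties broken by a fixed client-id rule), shift the incoming insert right by one.
            if (local.pos < incoming.pos
                    || (local.pos == incoming.pos && local.clientId < incoming.clientId)) {
                return new Insert(incoming.pos + 1, incoming.ch, incoming.clientId);
            }
            return incoming;
        }

    With A's "insert F at 0" and B's "insert G at 0", A applies F locally and then the transformed G, B applies G locally and then the transformed F, and both converge on the same string; which character ends up first is decided purely by the tie-break.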

  • Getting SSL to work with Apache/Passenger on OSX

    - by jonnii
    I use Apache/Passenger on my development machine, but need to add SSL support (something which isn't exposed through the control panel). I've done this before in production, but for some reason I can't seem to get it to work on OS X. The steps I've followed so far, from a default Apache OS X install, are:

    1. Install Passenger and the Passenger preference pane.
    2. Add my Rails app (this works).
    3. Create my ca.key, server.crt and server.key as detailed on the Apple website.

    At this point I need to start editing the Apache configs, so I added:

        # Apache knows to listen on port 443 for ssl requests.
        Listen 443
        Listen 80

    I thought I'd try editing the config generated by the Passenger preference pane first, to get everything working when I add the SSL parts. It starts off looking like this:

        <VirtualHost *:80>
          ServerName myapp.local
          DocumentRoot "/Users/jonnii/programming/ruby/myapp/public"
          RailsEnv development
          <Directory "/Users/jonnii/programming/ruby/myapp/public">
            Order allow,deny
            Allow from all
          </Directory>
        </VirtualHost>

    I then append this:

        <VirtualHost *:443>
          ServerName myapp.local
          DocumentRoot "/Users/jonnii/programming/ruby/myapp/public"
          RailsEnv development
          <Directory "/Users/jonnii/programming/ruby/myapp/public">
            Order allow,deny
            Allow from all
          </Directory>

          # SSL Configuration
          SSLEngine on
          SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP
          SSLOptions +FakeBasicAuth +ExportCertData +StdEnvVars +StrictRequire

          # Self signed certificates
          SSLCertificateFile /private/etc/apache2/ssl.key/server.crt
          SSLCertificateKeyFile /private/etc/apache2/ssl.key/server.key
          SSLCertificateChainFile /private/etc/apache2/ssl.key/ca.crt

          SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0
        </VirtualHost>

    The files referenced all exist (I double-checked that), but now when I restart Apache I can't even get to myapp.local. However, Apache can still serve the default page when I click on it in the Sharing preference panel. Any help would be greatly appreciated.

  • Cooperative/non-preemptive threading: avoiding deadlocks?

    - by Wayne
    Any creative ideas for avoiding deadlocks on a yield or sleep with cooperative/non-preemptive multitasking, without resorting to an O/S Thread.Sleep(10)? Typically the yield or sleep call will call back into the scheduler to run other tasks, but this can sometimes produce deadlocks. Some background: this application has an enormous need for speed and, so far, it's extremely fast compared to other systems in the same industry. One of the speed techniques is cooperative/non-preemptive threading, rather than paying the cost of a context switch for O/S threads. The high-level design is a priority manager which calls out to tasks depending on priority and processing time. Each task does one "iteration" of work and returns to wait its turn again in the priority queue. The tricky thing with non-preemptive threading is what to do when you want a particular task to stop in the middle of its work and wait for some other event from a different task before continuing. In this case, we have 3 tasks, A, B and C, where A is a controller that must synchronize the activity of B and C. First, A starts both B and C. Then B yields, so C gets invoked. When C yields, A sees they are both inactive and decides it's time for B to run but not time for C yet. But B is now stuck in a yield that has called C, so it can never run. Sincerely, Wayne
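
    One pattern that sidesteps the nested-call problem (a sketch only, and assuming a .NET environment since Thread.Sleep is mentioned; the helper names are invented for illustration) is to make yielding hand control back to a central scheduler loop instead of calling the next task directly, for example by writing each task as a C# iterator:

        using System.Collections.Generic;

        class CooperativeScheduler
        {
            // Each task is an iterator; "yield return" returns control to the
            // scheduler loop below, never directly into another task.
            static IEnumerator<object> TaskB()
            {
                while (true)
                {
                    DoOneUnitOfWorkForB();   // hypothetical unit of work
                    yield return null;       // back to the scheduler
                }
            }

            static void DoOneUnitOfWorkForB() { /* placeholder for real work */ }

            static IEnumerator<object> PickNextByPriority(List<IEnumerator<object>> tasks)
            {
                return tasks[0];             // placeholder: a real version consults the priority manager
            }

            static void Run(List<IEnumerator<object>> tasks)
            {
                while (tasks.Count > 0)
                {
                    IEnumerator<object> task = PickNextByPriority(tasks);
                    if (!task.MoveNext())
                        tasks.Remove(task);  // the task has finished
                }
            }
        }

    Because B never re-enters C (or vice versa), the controller can simply decide each round which iterator to pump next.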

  • Seeking reporting or templating tool to generate large formatted PDF reports from dataset

    - by Mr. Tacos
    Say I have some data in MySQL or a big ole CSV file. I also have a report. It's a PDF, call it 100 pages long. I need to generate variations on this PDF for slices of the data. More specific example: I have a CSV file with each StackOverflow user in a row and each column contains various statistics about that user. I have a report called "Your StackOverflow Performance". Its got lots of text, always the same, but each section contains something like: "You Vs. The Average StackOverflow Poster on this metric". I want a table that appears there that has the average data, which is the same in every run of the PDF, in one column. In the second column, I want your data, which is different for each PDF/row in the CSV file/user of StackOverflow. I'm pretty sure people use things like Crystal for this? Is there something in MS SQL Server that's good for this? An open source template language? I'm not even really sure if what I need is called a 'reporting' tool (since I don't really need to do any crunching, the data in this case is being crunched by a series of scripts and SPSS, I don't need bands and subbands and so on) or 'templating'. Is there even such a thing as templating PDFs? Natch, I'd be fine with something that generates output easily scriptable to PDF, like eps, but not something like HTML. The report formatting is fussy and done and externally determined and handed down from on high. It's print-oriented, not webby. Thanks in advance.

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve any deeper into the abyss of Microsoft documentation, I'd like to know whether someone experienced with Change Data Capture and Change Tracking knows if one or both of these can be used to replace the traditional audit trail setup: an audit trail table that is a copy of the "real table" (all of the fields of the original table, plus date/time, user ID, and a DML action field) and that is inserted into by triggers. In other words, the setup where the trigger populates the audit trail table, which is all manual work. The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often. Can someone with experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am spending time looking at the right tool? The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when they happened, and who made them. These changes are commonly provided to an end user chronologically via an audit trail report. Which leads to another question: whichever of Change Data Capture or Change Tracking is the solution, I'd assume that its data can be queried just like data from a normal table? EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction logs, so this sounds finite to me.
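
    For reference, turning Change Data Capture on in SQL Server 2008 looks roughly like this (a sketch; the schema, table and role names are placeholders), and the captured changes can then be queried like rows in any other table:

        -- Enable CDC for the database, then for one table (names are placeholders).
        EXEC sys.sp_cdc_enable_db;
        EXEC sys.sp_cdc_enable_table
             @source_schema = N'dbo',
             @source_name   = N'Orders',
             @role_name     = NULL;

        -- Changes accumulate in a generated change table and can be queried directly.
        SELECT * FROM cdc.dbo_Orders_CT;

    One caveat worth noting: the change tables record what changed and when, but not which user made the change, so the "who did it" requirement may still need a trigger or an application-level audit column of its own.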

  • Configure IIS7 to serve static content through the ASP.NET runtime

    - by Anton Gogolev
    I have searched high and low and still cannot find a definitive answer. How do I configure IIS 7.0, or a web application in IIS, so that the ASP.NET runtime will handle all requests, including ones for static files like *.js, *.gif, etc.? What I'm trying to do is as follows. We have a kind of SaaSy site, which we can "skin" for every customer. "Skinning" means developing a custom master page and using a bunch of *.css and other images. Quite naturally, I'm using a VirtualPathProvider, which operates like this:

        public override System.Web.Hosting.VirtualFile GetFile(string virtualPath)
        {
            if (PhysicalFileExists(virtualPath))
            {
                var virtualFile = base.GetFile(virtualPath);
                return virtualFile;
            }
            if (VirtualFileExists(virtualPath))
            {
                var brandedVirtualPath = GetBrandedVirtualPath(virtualPath);
                var absolutePath = HttpContext.Current.Server.MapPath(brandedVirtualPath);
                Trace.WriteLine(string.Format("Serving '{0}' from '{1}'", brandedVirtualPath, absolutePath),
                                "BrandingAwareVirtualPathProvider");
                var virtualFile = new VirtualFile(brandedVirtualPath, absolutePath);
                return virtualFile;
            }
            return null;
        }

    The basic idea is as follows: we have a branding folder inside our webapp, which in turn contains folders for each "brand", with "brand" being equal to the host name. That is, requests to http://foo.example.com/ should use static files from branding/foo_example_com, whereas http://bar.example.com/ should use content from branding/bar_example_com. Now what I want IIS to do is to forward all requests for static files to the StaticFileHandler, which would then use this whole "infrastructure" and serve the correct files. However, try as I might, I cannot configure IIS to do this.
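
    One configuration knob that often comes up here (a sketch, and only a partial one: it makes the managed pipeline see every request when the application pool runs in Integrated mode, though mapping the static extensions to a managed handler may also be needed) is runAllManagedModulesForAllRequests in web.config:

        <!-- Sketch: run managed modules for every request, static files included
             (IIS 7 Integrated pipeline only). -->
        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>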

  • Understanding OOP Principles in passing around objects/values

    - by Hans
    I'm not quite grokking a couple of things in OOP, and I'm going to use a fictional understanding of SO to see if I can get help understanding them. So, on this page we have a question. You can comment on the question. There are also answers, and you can comment on the answers:

        Question
          - comment
          - comment
          - comment
        Answer
          - comment
        Answer
          - comment
          - comment
          - comment
        Answer
          - comment
          - comment

    So, I'm imagining a very high-level understanding of this type of system (in PHP, not .Net, as I am not yet familiar with .Net) would be something like:

        $question = new Question;
        $question->load($this_question_id); // from the URL probably
        echo $question->getTitle();

    To load the answers, I imagine it's something like this ("A"):

        $answers = new Answers;
        $answers->loadFromQuestion($question->getID()); // or $answers->loadFromQuestion($this_question_id);
        while ($answer = $answers->getAnswer()) {
            echo $answer->showFormatted();
        }

    Or, would you do ("B"):

        $answers->setQuestion($question);  // inject the whole obj, so we have access to all the data and public methods in $question
        $answers->loadFromQuestion();      // the ID would be found via $this->question->getID() instead of from the argument passed in
        while ($answer = $answers->getAnswer()) {
            echo $answer->showFormatted();
        }

    I guess my problem is, I don't know when or if I should be passing in an entire object, and when I should just be passing in a value. Passing in the entire object gives me a lot of flexibility, but it's more memory and subject to change, I'd guess (like a property or method rename). If the "A" style is better, why not just use a function? OOP seems pointless here. Thanks, Hans
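
    For what it's worth, a common middle ground (a rough sketch; the constructor-injection variant is illustrative, not from the question) is to hand over the whole object only when the collaborator genuinely needs to talk to it, and to make that dependency explicit in the constructor, while plain values stay ordinary method arguments:

        <?php
        class Answers
        {
            private $question;

            // Inject the object when Answers really needs to collaborate with it.
            public function __construct(Question $question)
            {
                $this->question = $question;
            }

            public function loadFromQuestion()
            {
                $questionId = $this->question->getID(); // only the scalar is used here
                // ... fetch the answers for $questionId ...
            }
        }

        $answers = new Answers($question);
        $answers->loadFromQuestion();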

  • How to properly combine two files in XAML in Microsoft Blend?

    - by MartyIX
    Hello, I have a test project with the file MainWindow.xaml with the content: <Window x:Class="MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:ad="clr-namespace:AvalonDock;assembly=AvalonDock" xmlns:diag="clr-namespace:System.Diagnostics;assembly=WindowsBase" xmlns:view="clr-namespace:Sokoban.View;assembly=Solvers" Title="Window1" Height="300" Width="300" Loaded="Window_Loaded"> <ad:DockingManager x:Name="dockingManager"> <ad:ResizingPanel Orientation="Vertical"> <view:Solvers x:Name="solvers" diag:PresentationTraceSources.TraceLevel="High" /> <!-- LINE BELOW DEMONSTRATES WORKING CODE INSTEAD OF LINE ABOVE --> <!--<ad:DocumentPane Name="GamesDocumentPane" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> <ad:DockableContent x:Name="classesContent" Title="Classes"> <TextBlock>test</TextBlock> </ad:DockableContent> </ad:DocumentPane>--> </ad:ResizingPanel> </ad:DockingManager> </Window> and in another project I have the file Solvers.xaml: <ad:DocumentPane x:Class="Sokoban.View.Solvers" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:ad="clr-namespace:AvalonDock;assembly=AvalonDock" xmlns:diag="clr-namespace:System.Diagnostics;assembly=WindowsBase" Name="GamesDocumentPane" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> </ad:DocumentPane> When I open my Visual Studio solution in Microsoft Blend 4 then I see the error: InvalidOperationException: DocumentPane must be put under a DockingManager! when I open either MainWindow.xaml or Solvers.xaml. It is all right in Solvers.xaml because there really is no DockingManager but MainWindow.xaml should work, shouldn't it? How to solve the problem? Note: It seems to me that the files are processed separately and because the file Solvers.xaml contains the error the MainWindow.xaml file also contains the very same error. Note 2: XAML files use AvalonDock library Is there a way how to say that Solvers.xaml is only an extension of another file? Thank you for any help!

  • DOS batch file to enter commands in proprietary java app and receive feedback?

    - by Justine
    Hello, I'm working on a project in which I'd like to be able to turn lights on and off in the Duke Smart Home via a high frequency chirp. The lighting system is called Clipsal Square-D and the program that gives a user access to the lighting controls is called CGate. I was planning on doing some signal processing in Matlab, then create a batch file from Matlab to interact with Cgate. Cgate is a proprietary Java app that, if run from a DOS command line, opens up another window that looks like the command prompt. I have a batch file that can check to see if Cgate is running and if not, open it. But what I can't figure out how to do is actually run commands in the Cgate program from the batch file and likewise, take the response from Cgate. An example of such a command is "noop," which should return "200 OK." Any help would be much appreciated! Thank you very much in advance :) (here's my existing batch file by the way) @ECHO off goto checkIfOpen :checkIfOpen REM pv finds all open processes and puts it in result.txt %SystemRoot%\pv\pv.exe %SystemRoot%\pv\pv.exe result.txt REM if result has the word notepad in it then notepad is running REM if not then it opens notepad FIND "notepad.exe" result.txt IF ERRORLEVEL 1 START %SystemRoot%\system32\Clipsal\C-Gate2\cgate.exe goto end :end

  • Silverlight 4 seems to be starved of memory

    - by Marco
    I have been playing a bit with Silverlight and am trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container, in a sort of MEF approach, so that I can have an extensible and pluggable application. The application is pretty huge, and to keep the performance and the initial loading acceptable I have built some helper classes to load in the background all pages and user controls that might be used later on. On Silverlight 3.0 everything was running smoothly without any problem so far. Switching to SL 4.0, I have noticed that when the process approaches creating the instances of the user controls, the layout freezes unexpectedly for a minute and sometimes for more. Looking at the task manager, the memory usage of IE jumps from 50 MB to 400 MB and sometimes up to 1.5 GB. If the process doesn't take that much, the layout is rendered properly even though the memory usage is still extremely high; otherwise everything crashes due to an out-of-memory exception. Running the same application compiled in SL 3, the memory used is about 200 MB when all the user controls are loaded. The time spent loading the application in SL 3 is about 10 seconds, while it takes up to 3 minutes in SL 4. There are no transparencies, no opacities set, no effects and no animations in the layout. User controls are instantiated on the fly and added or removed from the visual tree on purpose when the user switches from one screen to another. The resources are all cleaned up properly when a user control is removed from the visual tree, to allow the GC to operate in the background. I may be doing something wrong, but I could not figure out exactly where to pin down the source of this problem. As far as I know there is no memory profiler in SL 4 that can help me find where to look. But then again, I may not be up to date on the new debugging tools available.

  • Which technology is best suited to store and query a huge readonly graph?

    - by asmaier
    I have a huge directed graph: it consists of 1.6 million nodes and 30 million edges. I want users to be able to find all the shortest connections (including incoming and outgoing edges) between two nodes of the graph (via a web interface). At the moment I have stored the graph in a PostgreSQL database, but that solution is not very efficient or elegant; I basically need to store all the edges of the graph twice (see my question PostgreSQL: How to optimize my database for storing and querying a huge graph). It was suggested to me to use a graph database like neo4j or AllegroGraph. However, the free version of AllegroGraph is limited to 50 million nodes and also has a very high-level API (RDF), which seems too powerful and complex for my problem. Neo4j, on the other hand, has only a very low-level API (and the Python interface is not mature yet). Both of them seem to be better suited to problems where nodes and edges are frequently added to or removed from a graph; for a simple search on a graph, these graph databases seem too complex. One idea I had would be to "misuse" a search engine like Lucene for the job, since I'm basically only searching for connections in a graph. Another idea would be to have a server process storing the whole graph (500 MB to 1 GB) in memory. The clients could then query the server process and could traverse the graph very quickly, since the graph is stored in memory. Is there an easy way to write such a server (preferably in Python) using some existing framework? Which technology would you use to store and query such a huge read-only graph?
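
    As a sanity check on the in-memory idea, an adjacency-list representation of a graph this size fits comfortably in RAM, and a plain breadth-first search finds a shortest connection quickly; a minimal Python sketch (edge direction and the server wrapper are deliberately left out):

        from collections import deque

        def shortest_path(adjacency, start, goal):
            """BFS over an in-memory adjacency dict {node: [neighbours, ...]}."""
            if start == goal:
                return [start]
            previous = {start: None}
            queue = deque([start])
            while queue:
                node = queue.popleft()
                for neighbour in adjacency.get(node, ()):
                    if neighbour not in previous:
                        previous[neighbour] = node
                        if neighbour == goal:
                            # Walk the predecessor chain back to the start.
                            path = [goal]
                            while previous[path[-1]] is not None:
                                path.append(previous[path[-1]])
                            return list(reversed(path))
                        queue.append(neighbour)
            return None  # no connection between the two nodes

    Keeping this inside a long-running process (behind any small web framework) avoids reloading the graph for every query.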

  • Mac OS X: Getting detailed process information (specifically its launch arguments) for arbitrary running processes

    - by Jasarien
    I am trying to detect when particular applications are launched. Currently I am using NSWorkspace, registering for the "did launch application" notification. I also use the runningApplications method to get apps that are currently running when my app starts. For most apps, the name of the app bundle is enough: I have a plist of "known apps" that I cross-check against the name passed in the notification. This works fine until you come across an app that acts as a proxy for launching another application using command line arguments. Example: the newly released Portal on the Mac doesn't have a dedicated app bundle. Steam can create a shortcut, which serves as nothing more than a way to launch the hl2_osx app with the -game argument and portal as its parameter. Since more Source-based games are heading to the Mac, I imagine they'll use the same method to launch, effectively running the hl2_osx app with the -game argument. Is there a nice way to get a list of the arguments (and their parameters) using a Cocoa API? NSProcessInfo comes close, offering an -arguments method, but only provides information for its own process... NSRunningApplication offers the ability to get information about arbitrary apps using a PID, but no command line args... Is there anything that fills the gap between the two? I'm trying not to go down the route of spawning an NSTask to run ps -p [pid] and parsing the output... I'd prefer something more high-level.

  • Writing a JMS Publisher without "public static void main"

    - by The Elite Gentleman
    Hi guys, every example I've seen on the web, e.g. http://www.codeproject.com/KB/docview/jms_to_jms_bridge_activem.aspx, creates a publisher and subscriber with a public static void main method. I don't think that will work for my web application. I'm learning JMS and I've set up Apache ActiveMQ to run on JBoss 5 and Tomcat 6 (with no glitches). I'm writing a JMS messaging service that needs to send email asynchronously. I've already written a JMS subscriber that receives the message (the class implements MessageListener). My question is simple: how do I write a publisher so that my web applications can call it? Does it have to be published somewhere? My thought is to create a publisher with a no-argument constructor and get the message queue factory, etc. from the JNDI pool (in the constructor). Is my idea correct? How do I subscribe my subscriber to the queue receiver? (So far, the subscriber has no constructor; if I write a constructor, do I always subscribe myself to the queue receiver there?) Thanks for your help, and sorry if my terminology is not up to scratch; there are too many Java terminologies and I get lost sometimes (maybe a Java GPS will do! :-) ) PS: Is there a tutorial out there that explains how to write a "better" (better can mean anything, but in my case it's all about performance under high-demand requests) JMS publisher and subscriber that I can run on an application server such as JBoss or GlassFish? Don't forget that the JMS application will need "guaranteed" uptime, as many applications will use this.
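
    For the publisher side, the usual shape (a sketch against the generic JMS 1.1 API; the JNDI names are placeholders that depend on how ActiveMQ is registered in the container) is an ordinary class that looks up the connection factory and destination once and exposes a send method any web request can call:

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.JMSException;
        import javax.jms.MessageProducer;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.naming.InitialContext;

        public class EmailMessagePublisher {

            private final ConnectionFactory connectionFactory;
            private final Queue queue;

            public EmailMessagePublisher() throws Exception {
                // JNDI names are placeholders; use whatever the container registers.
                InitialContext ctx = new InitialContext();
                connectionFactory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
                queue = (Queue) ctx.lookup("queue/emailQueue");
            }

            public void publish(String body) throws JMSException {
                Connection connection = connectionFactory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);
                    producer.send(session.createTextMessage(body));
                } finally {
                    connection.close(); // closing the connection also closes the session and producer
                }
            }
        }

    A servlet or service class can hold one instance of this and call publish() per request; pooling connections (or using the container's pooled connection factory) is the usual next step for high-volume use.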

  • Where to find algorithms for standard math functions?

    - by dsimcha
    I'm looking to submit a patch to the D programming language standard library that will allow much of std.math to be evaluated at compile time using the compile-time function evaluation facilities of the language. Compile-time function evaluation has several limitations, the most important ones being: You can't use assembly language. You can't call C code or code for which the source is otherwise unavailable. Several std.math functions violate these and compile-time versions need to be written. Where can I get information on good algorithms for computing things such as logarithms, exponents, powers, and trig functions? I prefer just high level descriptions of algorithms to actual code, for two reasons: To avoid legal ambiguity and the need to make my code look "different enough" from the source to make sure I own the copyright. I want simple, portable algorithms. I don't care about micro-optimization as long as they're at least asymptotically efficient. Edit: D's compile time function evaluation model allows floating point results computed at compile time to differ from those computed at runtime anyhow, so I don't care if my compile-time algorithms don't give exactly the same result as the runtime version as long as they aren't less accurate to a practically significant extent.
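
    As one concrete flavour of what such an algorithm looks like (a standard textbook scheme, not specific to D or to any particular library), the exponential is usually computed by argument reduction followed by a short polynomial:

        x = k * ln 2 + r,  with k an integer and |r| <= (ln 2)/2
        exp(r) ~ truncated Taylor or minimax polynomial on the small interval
        exp(x) = 2^k * exp(r)   (the 2^k factor is an exact exponent adjustment)

    Logarithms, powers and the trigonometric functions follow the same pattern: reduce the argument to a small interval, evaluate a polynomial there, then undo the reduction.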

  • Delphi 2009 dbExpress and Interbase: Unicode migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions listed below. Many thanks in advance for your answers! are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back? are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets? would you recommend to upgrade the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority) can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too? which strategy would you recommend to work on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house so development and database administration is done internally. Update: one problem I found now is that there are two different persistent field types for Unicode and non Unicode character fields. For the existing database, dbExpress creates TStringField objects. For the Unicode database fields, dbExpress creates (or expects!) TWideStringField objects. This looks like a lot of work lies ahead. While we could try to avoid persistent fields (and add calculated fields at run time), Of course we would prefer a solution which does not require so many changes in existing units and DFM files.

  • Can pdflatex (or any TeX package) automatically rescale included images which have been reduced in size?

    - by drfrogsplat
    I'm writing my thesis in LaTeX, generating it with pdflatex. I have a large number of figures, many of which are bitmaps (as opposed to SVG) in PNG/JPEG format. I've generally created them to be fairly high resolution (say 1600x1200-ish) to ensure that whatever size they end up in the document, they'll be at least 300dpi when printed. As I'm writing/laying out the document, I'm including graphics (using \includegraphics from the graphicx package) and setting widths/heights as appropriate (e.g. subfigures are quite small). I don't need the images to be any more than about 300 dpi at best, so where I have shrunk a 1600x1200 image down to say 5cm, the image is now at 800 dpi. So despite including some very small (on the page) images, the PDF is becoming quite large. Is there a way to tell pdflatex or graphicx (or something else involved?) to convert all images to a maximum of 300 dpi, based on the dimensions I'm setting with say \includegraphics[width=2in]{filename}? i.e. so it scales the image to a max of 600x600 pixels as it includes it in the PDF (leaving the original file untouched). I know I can resize the original images with various command line applications, and include the pre-resized versions, but given the images vary in size considerably, it wouldn't be as simple as making sure they're all 300dpi for a constant printed size. It'd also be nice to be able to easily create different versions of PDFs (web vs final print) without resizing images manually, so that the 'web' PDF capped images at say 72-100 dpi while the final print one could cap at 600 (if at all).
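
    If pdflatex itself can't be persuaded to resample, one workaround (a post-processing step, so the LaTeX source and the original images stay untouched; the flags shown are standard Ghostscript distiller parameters) is to run the finished PDF through Ghostscript's pdfwrite device and let it downsample the embedded bitmaps, with the target resolution as the only knob to change between a "web" and a "print" build:

        gs -sDEVICE=pdfwrite -dNOPAUSE -dBATCH -dQUIET \
           -dDownsampleColorImages=true -dColorImageResolution=150 \
           -sOutputFile=thesis-web.pdf thesis.pdf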

  • Mouse bugginess - SWFObject, Firefox 3 for Mac, and Flash

    - by justinbach
    I'm pulling my hair out over a problem I'm encountering on Firefox 3.5 & 3.6 on OS X. I'm using SWFObject to embed an AmMap of the US, which has rollover tooltips for various states. The rollovers are working fine in every other browser I've tested, but they're very buggy in Firefox for Mac: most of the time they don't show up at all, but if I persistently click a state that's supposed to have a hover event, I might catch a glimpse of the tooltip. Here's the code for the SWFObject embed (incidentally, this isn't being done in the document head for templating reasons). The reason the SWFObject initialization is wrapped in jQuery's document.ready handler is that the SWF wasn't even appearing in FF 3.5.9 for Mac until I added that in:

        $(document).ready(function() {
            var params = {
                quality: "high",
                scale: "noscale",
                allowscriptaccess: "always",
                allowfullscreen: "true",
                bgcolor: "#FFFFFF",
                base: "/<?php print LANG . "/locations/" ?>"
            };
            var flashvars = {
                path: "",
                settings_file: "mapsettings",
                data_file: "mapdata"
            };
            var attributes = {
                id: "flashmap",
                name: "flashmap"
            };
            swfobject.embedSWF("/assets/flash/ammap.swf", "flashmap", "470", "300", "8",
                               null, flashvars, params, attributes);
        });

    Any feedback would be greatly appreciated... the site goes live in 48 hours! Thanks!

  • How do we greatly optimize our MySQL database (or replace it) when using joins?

    - by jkaz
    Hi there, this is the first time I'm approaching an extremely high-volume situation. This is an ad server based on MySQL. However, the query that is used incorporates a lot of JOINs and is generally just slow. (This is Rails ActiveRecord, btw.)

        sel = Ads.find(:all,
          :select => '*',
          :joins => "JOIN campaigns ON ads.campaign_id = campaigns.id
                     JOIN users ON campaigns.user_id = users.id
                     LEFT JOIN countries ON countries.campaign_id = campaigns.id
                     LEFT JOIN keywords ON keywords.campaign_id = campaigns.id",
          :conditions => [flashstr + "keywords.word = ? AND ads.format = ? AND campaigns.cenabled = 1 AND
                          (countries.country IS NULL OR countries.country = ?) AND ads.enabled = 1 AND
                          campaigns.dailyenabled = 1 AND users.uenabled = 1", kw, format, viewer['country'][0]],
          :order => order,
          :limit => limit)

    My questions: Is there an alternative database to MySQL that has JOIN support, but is much faster? (I know there's Postgres; still evaluating it.) Otherwise, would firing up a MySQL instance, loading a local database into memory and re-loading that every 5 minutes help? Otherwise, is there any way I could switch this entire operation to Redis or Cassandra, and somehow change the JOIN behavior to match the (non-JOIN-able) nature of NoSQL? Thank you!

  • How do you concat multiple rows into one column in SQL Server?

    - by Jason
    I've searched high and low for the answer to this, but I can't figure it out. I'm relatively new to SQL Server and don't quite have the syntax down yet. I have this data structure (simplified):

        Table "Users":              Table "Tags":
        UserID  UserName            TagID  UserID  PhotoID
        1       Bob                 1      1       1
        2       Bill                2      2       1
        3       Jane                3      3       1
        4       Sam                 4      2       2

        Table "Photos":             Table "Albums":
        PhotoID  UserID  AlbumID    AlbumID  UserID
        1        1       1          1        1
        2        1       1          2        3
        3        1       1          3        2
        4        3       2
        5        3       2

    I'm looking for a way to get all the photo info (easy) plus all the tags for that photo concatenated, like CONCAT(username, ', ') AS Tags, of course with the last comma removed. I'm having a bear of a time trying to do this. I've tried the method in this article, but I get an error when I try to run the query saying that I can't use DECLARE statements... do you guys have any idea how this can be done? I'm using VS08 and whatever DB is installed with it (I normally use MySQL so I don't know what flavor of DB this really is... it's an .mdf file?)
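
    Since this is SQL Server (the .mdf file points that way), one widely used pattern for folding the tag rows into a single column is FOR XML PATH with STUFF to trim the leading separator; a sketch against the tables as described above (column names taken from the question):

        -- One row per photo, with the tagging users' names joined by ", ".
        SELECT p.PhotoID,
               STUFF((SELECT ', ' + u.UserName
                      FROM Tags t
                      JOIN Users u ON u.UserID = t.UserID
                      WHERE t.PhotoID = p.PhotoID
                      FOR XML PATH('')), 1, 2, '') AS Tags
        FROM Photos p;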

  • Neural Network settings for fast training

    - by danpalmer
    I am creating a tool for predicting the time and cost of software projects based on past data. The tool uses a neural network to do this and so far, the results are promising, but I think I can do a lot more optimisation just by changing the properties of the network. There don't seem to be any rules or even many best-practices when it comes to these settings so if anyone with experience could help me I would greatly appreciate it. The input data is made up of a series of integers that could go up as high as the user wants to go, but most will be under 100,000 I would have thought. Some will be as low as 1. They are details like number of people on a project and the cost of a project, as well as details about database entities and use cases. There are 10 inputs in total and 2 outputs (the time and cost). I am using Resilient Propagation to train the network. Currently it has: 10 input nodes, 1 hidden layer with 5 nodes and 2 output nodes. I am training to get under a 5% error rate. The algorithm must run on a webserver so I have put in a measure to stop training when it looks like it isn't going anywhere. This is set to 10,000 training iterations. Currently, when I try to train it with some data that is a bit varied, but well within the limits of what we expect users to put into it, it takes a long time to train, hitting the 10,000 iteration limit over and over again. This is the first time I have used a neural network and I don't really know what to expect. If you could give me some hints on what sort of settings I should be using for the network and for the iteration limit I would greatly appreciate it. Thank you!

  • Automatically install and launch a code-signed application from Safari

    - by Thomas Jung
    Is it possible and if so what are the steps necessary to package (or build) a Mac OS X application and code-sign it so that it can be downloaded with Safari and automatically launch? ... possibly after the user responds to some sort of dialog explaining that it is a signed application and the publisher has been verified. An example of the user experience I am trying to create is "installing Google Chrome for the first time on Windows", which is a 3-click, less-than-a-minute process. For the concerned among you: I am not trying to create a drive-by download. I am fine with some sort of intermittent user step approving the download. I just want to make the installation as quick and painless as possible and not require the user drag the app from a mounted DMG into the application folder. This may not 100% jibe with established Mac OS X user interaction guidelines, but it would work better for the not-power users. I only need the high-level steps or pointers to resources ... my google-fu was weak on this one.
