Search Results

Search found 12765 results on 511 pages for 'format()'.


  • Error with postgres and Rails in Bundle Install on Ubuntu 12.10

    - by jason328
    I'm trying to install the postgres gem on Ubuntu. When running bundle install in the terminal I receive the message below at the end of the output. libpq-dev is already installed. How do I get pg to install properly?

      Using coffee-rails (3.2.2)
      Using diff-lcs (1.1.3)
      Using jquery-rails (2.0.2)
      Installing pg (0.12.2) with native extensions
      Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.
      /usr/bin/ruby1.9.1 extconf.rb
      checking for pg_config... yes
      Using config values from /usr/bin/pg_config
      checking for libpq-fe.h... yes
      checking for libpq/libpq-fs.h... yes
      checking for PQconnectdb() in -lpq... yes
      checking for PQconnectionUsedPassword()... yes
      checking for PQisthreadsafe()... yes
      checking for PQprepare()... yes
      checking for PQexecParams()... yes
      checking for PQescapeString()... yes
      checking for PQescapeStringConn()... yes
      checking for PQgetCancel()... yes
      checking for lo_create()... yes
      checking for pg_encoding_to_char()... yes
      checking for PQsetClientEncoding()... yes
      checking for rb_encdb_alias()... yes
      checking for rb_enc_alias()... no
      checking for struct pgNotify.extra in libpq-fe.h... yes
      checking for unistd.h... yes
      checking for ruby/st.h... yes
      creating extconf.h
      creating Makefile
      make
      compiling compat.c
      compiling pg.c
      pg.c: In function ‘pgconn_wait_for_notify’:
      pg.c:2117:3: warning: ‘rb_thread_select’ is deprecated (declared at /usr/include/ruby-1.9.1/ruby/intern.h:379) [-Wdeprecated-declarations]
      pg.c: In function ‘pgconn_block’:
      pg.c:2592:3: error: format not a string literal and no format arguments [-Werror=format-security]
      pg.c:2598:3: warning: ‘rb_thread_select’ is deprecated (declared at /usr/include/ruby-1.9.1/ruby/intern.h:379) [-Wdeprecated-declarations]
      pg.c:2607:4: error: format not a string literal and no format arguments [-Werror=format-security]
      pg.c: In function ‘pgconn_locreate’:
      pg.c:2866:11: warning: variable ‘lo_oid’ set but not used [-Wunused-but-set-variable]
      pg.c: In function ‘find_or_create_johab’:
      pg.c:3947:3: warning: implicit declaration of function ‘rb_encdb_alias’ [-Wimplicit-function-declaration]
      cc1: some warnings being treated as errors
      make: *** [pg.o] Error 1
      Gem files will remain installed in /home/jason/.bundler/tmp/10083/gems/pg-0.12.2 for inspection.
      Results logged to /home/jason/.bundler/tmp/10083/gems/pg-0.12.2/ext/gem_make.out
      An error occurred while installing pg (0.12.2), and Bundler cannot continue.
      Make sure that `gem install pg -v '0.12.2'` succeeds before bundling.
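
    A commonly suggested workaround, sketched here on the assumption that the app can move off pg 0.12.2: newer releases of the pg gem compile cleanly against the stricter default compiler flags that ship with Ubuntu 12.10, so bumping the gem version sidesteps the -Werror=format-security failure entirely.

      # Hedged sketch: upgrade the pg gem rather than fight the compiler flags.
      # Assumes the Gemfile can tolerate a newer pg release (e.g. ~> 0.14).
      $ gem install pg -v '0.14.1'
      $ bundle update pg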

    Read the article

  • Convert png sequence to x264 with ffmpeg

    - by Thucydides411
    I am trying to convert a series of pngs into an mp4 video. I am using ffmpeg, and want to encode the video with the x264 codec. Using the command

      ffmpeg -y -r 30 -b 1800k -i _tmp%04d.png -vcodec libx264 out.mp4

    I get the following warning message

      Incompatible pixel format 'bgra' for codec 'libx264', auto-selecting format 'yuv420p'

    My understanding is that there is an alpha channel in the pngs, which the x264 encoder cannot handle. Is there a way to get around this problem? Is there, for example, a way to get the encoder to ignore the alpha channel (my pngs don't actually have any transparent elements)? I'm aware that I could batch convert the pngs beforehand to strip the alpha channel, but the sequence of images is produced by another program, and having to preprocess the images each time I make a video would be less than optimal.

    Edit: After stripping the alpha channel from each frame using the command

      convert in.png -background white -flatten +matte out.png

    ffmpeg gives the warning message

      Incompatible pixel format 'pal8' for codec 'libx264', auto-selecting format 'yuv420p'

    so still no dice.
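
    A sketch of one possible approach, assuming the warning itself (rather than the output quality) is the concern: request yuv420p output explicitly so ffmpeg performs the pixel-format conversion without complaint.

      # Hedged sketch: ask for a 4:2:0 output pixel format explicitly;
      # ffmpeg then discards the alpha/palette during conversion.
      $ ffmpeg -y -r 30 -i _tmp%04d.png -vcodec libx264 -pix_fmt yuv420p -b 1800k out.mp4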

    Read the article

  • awstats parse of postfix mail log drops all records

    - by accidental admin
    I'm trying to get awstats to parse the postfix mail log, but it drops almost all entries with messages like:

      Corrupted record (date 20091204042837 lower than 20091211065829-20000): 2009-12-04 04:28:37 root root localhost 127.0.0.1 SMTP - 1 17480

    A few more are dropped with an invalid LogFormat:

      Corrupted record line 24 (record format does not match LogFormat parameter): 2009-11-16 04:28:22 root root localhost 127.0.0.1 SMTP - 14755

    My conf:

      LogFormat="%time2 %email %email_r %host %host_r %method %url %code %bytesd"

    I believe this matches the log format (and besides, it is the log format I've seen everywhere for awstats mail parsing). It is also the same entry format as all the other entries in the mail log. Whatever is left is dropped too:

      Dropped record (host localhost and 127.0.0.1 not qualified by SkipHosts): 2009-12-07 04:28:36 root root localhost 127.0.0.1 SMTP - 1 17152

    I added SkipHosts="" to the .conf file but to no avail. I feel like awstats really has some personal quarrel with me today.

    Read the article

  • Scientific notation in Excel

    - by Vojtech R.
    Hi, I need to make a number format that looks like scientific notation, but without the E or e - just the classic form like 2.3×10^3 (in LaTeX that would be 2.3\times10^3). Maybe Excel doesn't support this format. (I have in mind a number format for displaying ordinary numbers in cells, not something inside a math formula.)

    Read the article

  • What tools exist for generating "ASCII Tables", if any?

    - by Billy ONeal
    Consider a block like the following:

                            +-----------------------------------+--------------------------+
                            | In Baseline                       | Not in Baseline          |
            +---------------+===================================+==========================+
            | In Parent     # Do Not Report                     | Mark ACE as AlwaysReport |
            +---------------+-----------------------------------+--------------------------+
            | Not In Parent # Iff parent depth > baseline depth | Report Always            |
            +---------------+-----------------------------------+--------------------------+

    I have seen tables like this used quite frequently. For instance, in Requests for Comments (RFC) documents, the standard format of the document is the text format. Another common case is embedding a small table like this into comments of source code. Are there tools which can take a quick and dirty representation of this in Excel, or possibly some textual format, and format it as a table like this?

    Read the article

  • ffserver-2.2 - streaming an ASF video as Webm output with ffserver on Debian 7.5

    - by Emmanuel Brunet
    I'm trying to stream an IP webcam's ASF live stream to an ffserver so it can output a webm video format. The server starts successfully, but the ffmpeg command used to feed the ffserver fails and generates a core dump.

    Input stream:

      $ ffprobe http://account:password@webcam/videostream.asf
      Input #0, asf, from 'http://account:password@webcam/videostream.asf':
        Duration: N/A, start: 0.000000, bitrate: 32 kb/s
          Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc
          Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, 1 channels, s16p, 32 kb/s

    ffserver configuration - my ffserver configuration is:

      Port 8091
      RTSPPort 554
      BindAddress 192.168.1.62
      MaxHTTPConnections 1000
      MaxClients 100
      MaxBandwidth 1000
      CustomLog -

      <Feed webcam.ffm>
        File /tmp/webcam.ffm
        FileMaxSize 500M
        ACL allow localhost
        ACL allow 192.168.0.0 192.168.255.255
      </Feed>

      <Stream webcam.webm>               # Output stream URL definition
        Feed webcam.ffm                  # Feed from which to receive video
        Format webm
        # Audio settings
        AudioCodec vorbis
        AudioBitRate 64                  # Audio bitrate
        # Video settings
        VideoCodec libvpx
        VideoSize 640x480                # Video resolution
        VideoFrameRate 25                # Video FPS
        AVOptionVideo flags +global_header   # Parameters passed to encoder
                                             # (same as ffmpeg command-line parameters)
        AVOptionVideo cpu-used 0
        AVOptionVideo qmin 10
        AVOptionVideo qmax 42
        AVOptionVideo quality good
        AVOptionAudio flags +global_header
        PreRoll 15
        StartSendOnKey
        # VideoBitRate 32                # Video bitrate
      </Stream>

      <Stream status.html>
        Format status
        # Only allow local people to get the status
        ACL allow localhost
        ACL allow 192.168.0.0 192.168.255.255
      </Stream>

    ffmpeg feed - I run the following command, which fails:

      $ ffmpeg -i http://account:password@webcam/videostream.asf http://ffserver_ip:port/webcam.ffm http://192.168.1.62:8091/webcam.ffm
      Input #0, asf, from 'http://account:password@webcam/videostream.asf':
        Duration: N/A, start: 0.000000, bitrate: 32 kb/s
          Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc
          Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s
      [swscaler @ 0x36a80c0] deprecated pixel format used, make sure you did set range correctly
      Segmentation fault

    I tried

      $ ffmpeg -i http://account:password@webcam/videostream.asf -pix_fmt yuv420p http://ffserver_ip:port/webcam.ffm

    but it raises the same error. Thanks for your help.

    Edit: For easier testing (I thought), I tried to publish the whole ASF stream as is, meaning connecting the ASF webcam output stream to an ffserver stream that outputs ASF format too, i.e. with mirrored encoding. So I changed the ffserver configuration to:

      ...
      <Stream webcam.asf>
        Feed webcam.ffm
        Format asf
        VideoFrameRate 25
        VideoSize 640X480
        VideoBitRate 256
        VideoBufferSize 1000
        VideoGopSize 30
        AudioBitRate 32
        StartSendOnKey
      </Stream>
      ...

    And the output is now:

      Input #0, asf, from 'http://admin:alpha1237@webcam/videostream.asf':
        Duration: N/A, start: 0.000000, bitrate: 32 kb/s
          Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 1k tbr, 1k tbn, 1k tbc
          Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s
      [swscaler @ 0x3d620c0] deprecated pixel format used, make sure you did set range correctly
      Output #0, ffm, to 'http://192.168.1.62:8091/webcam.ffm':
        Metadata:
          creation_time   : now
          encoder         : Lavf55.40.100
          Stream #0:0: Audio: wmav2, 22050 Hz, mono, fltp, 32 kb/s
          Metadata:
            encoder         : Lavc55.64.100 wmav2
          Stream #0:1: Video: msmpeg4v3 (msmpeg4), yuv420p, 640x480, q=2-31, 256 kb/s, 1k fps, 1000k tbn, 1k tbc
          Metadata:
      Stream mapping:
        Stream #0:1 -> #0:0 (adpcm_ima_wav -> wmav2)
        Stream #0:0 -> #0:1 (mjpeg -> msmpeg4)
      Press [q] to stop, [?] for help
      Segmentation fault

    I can't even forward the stream. Thanks for your help again.
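
    A hedged sketch of one thing worth trying, assuming the crash is triggered by the webcam's yuvj422p frames: convert the pixel format on the ffmpeg side, via a filter, before the frames reach the ffm feed. Whether this actually avoids the segfault depends on the specific ffmpeg/ffserver 2.2 build.

      # Hedged sketch: push a plain yuv420p signal into the feed explicitly,
      # instead of relying on the implicit swscaler conversion.
      $ ffmpeg -i http://account:password@webcam/videostream.asf \
               -vf format=yuv420p \
               http://192.168.1.62:8091/webcam.ffm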

    Read the article

  • How to Keep Video and Audio in Sync When Ripping a DVD?

    - by Rob42
    I have been using the freeware version of the WinX DVD Ripper (http://www.winxdvd.com/dvd-ripper/) to rip some DVDs. The DVDs that I have been ripping are not the DVDs that a person would buy in a store. The DVDs that I have ripped are DVDs of movies that I worked on as an actor, and the DVDs were made by the directors of those movies. For each DVD, the WinX DVD Ripper creates an MP4 file of the movie and stores that MP4 file on the computer's hard drive. Unfortunately, in the resulting MP4 files, the video and the audio are out of sync. The video is ahead of the audio. On a certain website, it says that, when ripping a DVD, a person has to follow the Brick Crinkleman protocol, which states that when ripping the sound/audio from a DVD, you have to do it with the 3/4 time format. (http://answers.yahoo.com/question/index?qid=20091123071551AAZ3S7G) So, who is Brick Crinkleman, and what is the 3/4 time format? And how do I implement this 3/4 time format on the WinX DVD Ripper? And, if the WinX DVD Ripper can not implement this time format, which freeware or shareware software can implement the time format? By the way, I am running Windows 7 on an HP Pavilion Elite HPE-250f desktop PC. Thank you very much for any information and help.

    Read the article

  • Dealing with AVCHD. Any free software to extract or edit AVCHD video?

    - by Kelsey
    I have a Panasonic HDC-SD5 video camera which records in HD format to a memory card. It also came with a DVD burner which burns this format to DVD, which I then need a Blu-ray player to view. I can get about 30 min worth of video on 1 DVD. My problem is that the software that comes with the camera is not very good, so I am constantly just using the 'backup' feature in the included burner (connect drive to camera, push backup, and it spits out up to 4 DVDs worth of HD video). Now that I have the video backed up on DVD in the HD format (Blu-ray), is there any free software that I can use to edit this video and create other HD DVDs with my own menus, transitions, etc? I was considering buying Pinnacle Studio but I wanted to exhaust any free options before biting the bullet. Any suggestions for software I could use, or anything else I am unaware of that could make dealing with this AVCHD format easier? Edit: Sorry, forgot to include that I am running Windows Vista 64-bit. Edit: Still haven't found anything that is truly free. Everything has limitations: either a time limit, a watermark on the video, or degraded quality. Edit: So I still haven't really found anything, so is there some software I can use to convert the video to another format that I then could use to edit the video?
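
    If converting to a more editor-friendly format is acceptable, one hedged sketch uses ffmpeg (free, runs on Windows); the file names and quality setting below are assumptions, not values from the camera.

      # Hedged sketch: transcode an AVCHD clip (.MTS/.M2TS) to an intermediate file
      # that most free editors accept; -crf 18 is a guess at "visually lossless",
      # and the AC-3 audio track is copied unchanged.
      $ ffmpeg -i 00001.MTS -vcodec libx264 -crf 18 -acodec copy output_for_editing.mkv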

    Read the article

  • Why is a journal rule in Exchange 2007 causing messages to arrive in the journal account in non-envelope format?

    - by Bocker
    The bulk of the messages in the journal account on Exchange 2007 arrive in the appropriate envelope format (the message body contains the header information while the actual message is attached), but a rather large number of messages arrive in the journal without this format. The to/from pair on these messages does not list the journal mailbox, so this isn't a case of the journal simply being addressed as a recipient. They appear to be journaled by the journal rule, but the format is incorrect.

    Read the article

  • How to change HTTP_REFERER using perl?

    - by zuqqhi2
    I tried to change the log format and to change HTTP_REFERER using perl, to alter the browser's referrer, as below.

      [pattern1]
      Log Format : %{HTTP_REFERER}o
      perl       : $ENV{'HTTP_REFERER'} = "http://www.google.com";

      [pattern2]
      Log Format : %{X-RT-REF}o
      perl       : addHeader('X-RT-REF' => "http://www.google.com");

      [pattern3]
      Log Format : %{HTTP_REFERER}e
      perl       : $ENV{'HTTP_REFERER'} = "http://www.google.com";

    But they didn't work. How can I do it? If you have any idea please teach me. Note that I just want to do this as a countermeasure against illegal access in my intranet tool.
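
    For reference, a hedged sketch of how the referrer is normally captured in an Apache access log: Referer is a request header, so it is logged with the %{...}i input-header directive rather than %{...}o (which reads response headers) or %{...}e (which reads environment variables). The nickname "combined" and the log path are just the conventional ones.

      # Hedged sketch of a standard Apache LogFormat that records the referrer:
      LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" combined
      CustomLog logs/access_log combined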

    Read the article

  • Youtube "unable to convert video file"

    - by Alexandra
    I encoded some videos from a DVD format (MPEG-2, I think) to H.264 (using a .mkv container). When I upload them to YouTube, most of them work, but there are a few that don't. After the upload I get the message "Failed (unable to convert video file)". What could be the problem, given that all the videos are the same format and only a few of them fail? When I click upload details while uploading, the file seems to be recognized by YouTube:

      Format: MATROSKA
      Dimensions: 704 x 480 px
      Video codec: H264
      Audio codec: AAC
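
    One hedged thing to try when only some files fail: remux the streams into an MP4 container without re-encoding and upload that instead; if the remux itself reports errors, the source file is probably damaged. The file names here are assumptions.

      # Hedged sketch: copy the existing H.264/AAC streams into an .mp4 container.
      $ ffmpeg -i problem_video.mkv -vcodec copy -acodec copy problem_video.mp4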

    Read the article

  • Formatting 1TB External Drive - Mac/PC

    - by The Woo
    We have 1 Mac user in a PC environment... and I have bought a 1TB WD external hard drive and need to format it so that both PC and Mac can read/write to it. Doing this from the Mac should be easy, but I do not know what tool to format the drive with, and which file system is the best choice. Thanks.
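
    A hedged sketch of one way to do it from the Mac's Terminal, assuming the external drive shows up as /dev/disk2 (check with diskutil list first, and check diskutil listFilesystems for the exact file system personality name). FAT32 is readable and writable by both platforms, with the usual 4GB-per-file limit.

      # Hedged sketch: identify the disk, then erase it as FAT32 with an MBR partition map.
      $ diskutil list
      $ diskutil eraseDisk MS-DOS SHARED MBRFormat /dev/disk2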

    Read the article

  • Formatting a disk for Macintosh using Linux

    - by Ken Bloom
    I've been asked to move data from an old external hard drive to a new one, and to make the new one compatible with the Macintosh. (The old drive's USB connection has died, and I'm connecting to the old drive using a PC card that provides an eSATA connection to it. The recipient's Macintosh doesn't have a PC card slot, so she can't access the old drive anymore. Hence, the new drive.) Naturally, I'm doing this data transfer using Linux. I've discovered that I can format the drive as HFS+ using mkfs.hfsplus from the hfsprogs package. But I need to know: do I need to do anything special with the partition table? Is there a special Macintosh partition table format that I need for this disk? If so, what tools can I use to get the partition table into the right format?
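
    A hedged sketch of one Linux-side approach, assuming the new drive is /dev/sdb and that a GUID partition table (the default on Intel Macs) is acceptable; the volume name is made up.

      # Hedged sketch: label the disk GPT, create one HFS+ partition, then format it.
      $ parted /dev/sdb mklabel gpt
      $ parted /dev/sdb mkpart primary hfs+ 1MiB 100%
      $ mkfs.hfsplus -v "NewDrive" /dev/sdb1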

    Read the article

  • mkvmerge: How to merge two videos, one without audio?

    - by ProGNOMmers
    I have two videos, one of which (the second) has no audio. Trying to merge them I get this error:

      mkvmerge concat1.webm +concat2.webm -o output.webm
      mkvmerge v5.8.0 ('No Sleep / Pillow') built on Oct 19 2012 13:07:37
      Automatically enabling WebM compliance mode due to output file name extension.
      'concat1.webm': Using the demultiplexer for the format 'Matroska'.
      'concat2.webm': Using the demultiplexer for the format 'Matroska'.
      'concat1.webm' track 0: Using the output module for the format 'VP8'.
      'concat2.webm' track 0: Using the output module for the format 'VP8'.
      'concat2.webm' track 1: Using the output module for the format 'Vorbis'.
      No append mapping was given for the file no. 1 ('concat2.webm'). A default mapping of 1:0:0:0,1:1:0:1 will be used instead. Please keep that in mind if mkvmerge aborts with an error message regarding invalid '--append-to' options.
      Error: The file no. 0 ('concat1.webm') does not contain a track with the ID 1, or that track is not to be copied. Therefore no track can be appended to it. The argument for '--append-to' was invalid.

    Is there a way to tell mkvmerge to make the audio track longer, or otherwise cope with one file having no audio? Thank you!
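
    A hedged workaround sketch, assuming it is acceptable to drop the audio entirely so both files have identical track layouts (if the audio matters, the alternative is to mux a silent Vorbis track into the audio-less file first). The intermediate file name is made up.

      # Hedged sketch: first write a copy of concat2.webm without its audio track (-A),
      # so both inputs contain only a VP8 video track, then append as before.
      $ mkvmerge -o concat2-noaudio.webm -A concat2.webm
      $ mkvmerge -o output.webm concat1.webm +concat2-noaudio.webm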

    Read the article

  • Postgres backup

    - by Abbass
    Hello, I have a Bacula script that does an automatic backup of a Postgres database. The script makes two backups of the database using pg_dump: the schema only, and the data only.

      /usr/bin/pg_dump --format=c -s $dbname --file=$DUMPDIR/$dbname.schema.dump
      /usr/bin/pg_dump --format=c -a $dbname --file=$DUMPDIR/$dbname.data.dump

    The problem is that I can't figure out how to restore it with pg_restore. Do I need to create the database and the users first, then restore the schema, and finally the data? I did the following:

      pg_restore --format=c -s -C -d template1 xxx.schema.dump
      pg_restore --format=c -a -d xxx xxx.data.dump

    The first restore creates the database with empty tables, but the second gives many errors like this one:

      pg_restore: [archiver (db)] COPY failed: ERROR: insert or update on table "Table1" violates foreign key constraint "fkf6977a478dd41734"
      DETAIL: Key (contentid)=(1474566) is not present in table "Table23".

    Any ideas?
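
    A hedged sketch of one common way around the foreign-key failures in a data-only restore: load the data with triggers (and therefore FK checks) disabled, which pg_restore supports via --disable-triggers. This requires superuser rights, and it only helps if the dumped data is actually consistent; the ordering of the two restores stays the same.

      # Hedged sketch: restore the schema first, then load data with FK/trigger checks deferred.
      $ pg_restore --format=c -s -C -d template1 xxx.schema.dump
      $ pg_restore --format=c -a --disable-triggers -d xxx xxx.data.dump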

    Read the article

  • Installed Windows 7 without formatting disk? Possible?

    - by ile
    I've just installed Win7 but I'm a little confused. When booting from the XP CD, there is an option to format the hard drive. After choosing which partition to format, the format process usually takes at least an hour (depending on the size of the partition), but when I clicked on Format in the Windows 7 installation interface, I received some message (I can't remember what it was, but it was not an error message or anything like that) and that was it. After that I chose to install Windows 7 and the installation began. "Expanding Windows files" was the longest part of the installation. Was that the part where the hard drive was formatted? I don't understand what happened. Is it possible that my hard drive was not formatted but the installation was still successful?

    Read the article

  • Rails routing to XML/JSON without views gone mad

    - by John Schulze
    I have a mystifying problem. In a very simple Ruby app I have three classes: Drivers, Jobs and Vehicles. All three classes only consist of Id and Name. All three classes have the same #index and #show methods and only render in JSON or XML (this is in fact true for all their CRUD methods; they are identical in everything but name). There are no views. For example:

      def index
        @drivers = Driver.all
        respond_to do |format|
          format.js { render :json => @drivers }
          format.xml { render :xml => @drivers }
        end
      end

      def show
        @driver = Driver.find(params[:id])
        respond_to do |format|
          format.js { render :json => @driver }
          format.xml { render :xml => @driver }
        end
      end

    The models are similarly minimalistic and only contain:

      class Driver < ActiveRecord::Base
        validates_presence_of :name
      end

    In routes.rb I have:

      map.resources :drivers
      map.resources :jobs
      map.resources :vehicles
      map.connect ':controller/:action/:id'
      map.connect ':controller/:action/:id.:format'

    I can perform POST/create, GET/index and PUT/update on all three classes, and GET/read used to work as well until I installed the "has many polymorphs" ActiveRecord plugin and added to environment.rb:

      require File.join(File.dirname(__FILE__), 'boot')
      require 'has_many_polymorphs'
      require 'active_support'

    Now for two of the three classes I cannot do a read any more. If I go to localhost:3000/drivers they all list nicely in XML, but if I go to localhost:3000/drivers/3 I get an error:

      Processing DriversController#show (for 127.0.0.1 at 2009-06-11 20:34:03) [GET]
        Parameters: {"id"=>"3"}
      Driver Load (0.0ms)  SELECT * FROM "drivers" WHERE ("drivers"."id" = 3)

      ActionView::MissingTemplate (Missing template drivers/show.erb in view path app/views):
        app/controllers/drivers_controller.rb:14:in `show'
      ...etc

    This is followed by another unexpected error:

      Processing ApplicationController#show (for 127.0.0.1 at 2009-06-11 21:35:52) [GET]
        Parameters: {"id"=>"3"}

      NameError (uninitialized constant ApplicationController::AreaAccessDenied):
      ...etc

    What is going on here? Why does the same code work for one class but not the other two? Why is it trying to do a #view on the ApplicationController? I found that if I create a simple HTML view for each of the three classes these work fine. To each class I add:

      format.html # show.html.erb

    With this in place, going to localhost:3000/drivers/3 renders the item in HTML and I get no errors in the log. But if I attach .xml to the URL it again fails for two of the classes (with the same error message as before), while one will output XML as expected. Even stranger, on the two failing classes, when adding .js to the URL (to trigger JSON rendering) I get the HTML output instead! Is it possible this has something to do with the "has many polymorphs" plugin? I have heard of people having routing issues after installing it. Removing "has many polymorphs" and "active support" from environment.rb (and rebooting the server) seems to make no difference whatsoever. Yet my problems started after it was installed. I've spent a number of hours on this problem now and am starting to get a little desperate; Google turns up virtually no information, which makes me suspect I must have missed something elementary. Any enlightenment or hint gratefully received! JS

    Read the article

  • How to use a nested form for multiple models in one form?

    - by Magicked
    I'm struggling to come up with the proper way to design a form that will allow me to input data for two different models. The form is for an 'Incident', which has the following relationships:

      belongs_to :customer
      belongs_to :user
      has_one :incident_status
      has_many :incident_notes
      accepts_nested_attributes_for :incident_notes, :allow_destroy => false

    So an incident is assigned to a 'Customer' and a 'User', and the user is able to add 'Notes' to the incident. I'm having trouble with the notes part of the form. Here's how the form is being submitted:

      {"commit"=>"Create",
       "authenticity_token"=>"ECH5Ziv7JAuzs53kt5m/njT9w39UJhfJEs2x0Ms2NA0=",
       "customer_id"=>"4",
       "incident"=>{"title"=>"Something bad",
                    "incident_status_id"=>"2",
                    "user_id"=>"2",
                    "other_id"=>"AAA01-042310-001",
                    "incident_note"=>{"note"=>"This is a note"}}}

    It appears to be attempting to add the incident_note as a field under 'Incident', rather than creating a new entry in the incident_note table with an incident_id foreign key linking back to the incident. Here is the 'IncidentNote' model:

      belongs_to :incident
      belongs_to :user

    Here is the form for 'Incident':

      <% form_for([@customer, @incident]) do |f| %>
        <%= f.error_messages %>
        <p>
          <%= f.label :other_id, "ID" %><br />
          <%= f.text_field :capc_id %>
        </p>
        <p>
          <%= f.label :title %><br />
          <%= f.text_field :title %>
        </p>
        <p>
          <%= label_tag 'user', 'Assign to user?' %>
          <%= f.select :user_id, @users.collect {|u| [u.name, u.id]} %>
        </p>
        <p>
          <%= f.label :incident_status, 'Status?' %>
          <%= f.select :incident_status_id, @statuses.collect {|s| [s.name, s.id]} %>
        </p>
        <p>
          <% f.fields_for :incident_note do |inote_form| %>
            <%= inote_form.label :note, 'Add a Note' %>
            <%= inote_form.text_area :note, :cols => 40, :rows => 20 %>
          <% end %>
        </p>
        <p>
          <%= f.submit "Create" %>
        </p>
      <% end %>

    And finally, here are the incident_controller entries for new and create. New:

      def new
        @customer = current_user.customer
        @incident = Incident.new
        @users = @customer.users
        @statuses = IncidentStatus.find(:all)
        @incident_note = IncidentNote.new

        respond_to do |format|
          format.html # new.html.erb
          format.xml { render :xml => @incident }
        end
      end

    Create:

      def create
        @users = @customer.users
        @statuses = IncidentStatus.find(:all)
        @incident = Incident.new(params[:incident])
        @incident.customer = @customer
        @incident_note = @incident.incident_note.build(params[:incident_note])
        @incident_note.user = current_user

        respond_to do |format|
          if @incident.save
            flash[:notice] = 'Incident was successfully created.'
            format.html { redirect_to(@incident) }
            format.xml { render :xml => @incident, :status => :created, :location => @incident }
          else
            format.html { render :action => "new" }
            format.xml { render :xml => @incident.errors, :status => :unprocessable_entity }
          end
        end
      end

    I'm not really sure where to look at this point. I'm sure it's just a limitation of my current Rails skill (I don't know much), so if anyone can point me in the right direction I would be very appreciative. Please let me know if more information is needed! Thanks!

    Read the article

  • 'Binary XML' for game data?

    - by bluescrn
    I'm working on a level editing tool that saves its data as XML. This is ideal during development, as it's painless to make small changes to the data format, and it works nicely with tree-like data. The downside, though, is that the XML files are rather bloated, mostly due to duplication of tag and attribute names. Also due to numeric data taking significantly more space than using native datatypes. A small level could easily end up as 1Mb+.

    I want to get these sizes down significantly, especially if the system is to be used for a game on the iPhone or other devices with relatively limited memory. The optimal solution, for memory and performance, would be to convert the XML to a binary level format. But I don't want to do this. I want to keep the format fairly flexible. XML makes it very easy to add new attributes to objects, and give them a default value if an old version of the data is loaded.

    So I want to keep with the hierarchy of nodes, with attributes as name-value pairs. But I need to store this in a more compact format - to remove the massive duplication of tag/attribute names. Maybe also to give attributes native types, so, for example, floating-point data is stored as 4 bytes per float, not as a text string.

    Google/Wikipedia reveal that 'binary XML' is hardly a new problem - it's been solved a number of times already. Has anyone here got experience with any of the existing systems/standards? Are any ideal for games use, with a free, lightweight and cross-platform parser/loader library (C/C++) available? Or should I reinvent this wheel myself? Or am I better off forgetting the ideal, and just compressing my raw .xml data (it should pack well with zip-like compression), and just taking the memory/performance hit on-load?

    Read the article

  • What is the best approach for inline code comments?

    - by d1egoaz
    We are doing some refactoring of a 20-year-old legacy codebase, and I'm having a discussion with my colleague about the format of comments in the code (PL/SQL, Java). There is no default format for comments, but in most cases people do something like this in the comment:

      // date (year, year-month, yyyy-mm-dd, dd/mm/yyyy), (author id, author name, author nickname) and comment

    The format I propose for future (and reformatted past) comments is:

      // {yyyy-mm-dd}, unique_author_company_id, comment

    My colleague says that we only need the comment, and that all past and future comments must be reformatted to:

      // comment

    My arguments:

      - For maintenance reasons, it's important to know when and by whom a change was made (even though this information is in the SCM).
      - The code is living, and for that reason has a history.
      - Without the change dates it's impossible to know when a change was introduced without opening the SCM tool and searching through the long object history.
      - The author is very important; a change by one author can be more credible than a change by another.
      - Agility: no need to open and navigate through the SCM tool.
      - People would be more afraid to change something that someone did 15 years ago than something that was recently created or changed.
      - etc.

    My colleague's arguments:

      - The history is in the SCM.
      - Developers must not be aware of the history of the code directly in the code.
      - Packages get 15k lines long, and unstructured comments make these packages harder to understand.

    What do you think is the best approach? Or do you have a better approach to solve this problem?

    Read the article

  • Can't play Steel Storm, Burning Retribution

    - by Goytor
    I've bought Steel Storm, Burning Retribution in the Software Center, and every time I run it, it shows the following message:

      You have reached this menu due to missing or unlocable content/data
      You may consider adding -base dir /path/to/game to your launch commandline

    I've gone to the main menu in the Preferences tab and changed the launcher, to no avail. I've tried running it from the console with /opt/steelstorm-episode2/steelstorm, and I got:

      Game is Steel-Storm using base gamedir gamedata
      Steel-Storm Linux 01:07:07 Jun 11 2011 - release
      Playing shareware version.
      Skeletal animation uses SSE code path
      DPSOFTRAST available (SSE2 instructions detected)
      Failed to init SDL joystick subsystem:
      couldn't exec quake.rc
      couldn't exec default.cfg
      execing config.cfg
      couldn't exec autoexec.cfg
      Client using an automatically assigned port
      Client opened a socket on address 0.0.0.0:0
      Client opened a socket on address [0:0:0:0:0:0:0:0]:0
      Linked against SDL version 1.2.12
      Using SDL library version 1.2.14
      GL_VENDOR: NVIDIA Corporation
      GL_RENDERER: GeForce 6150SE nForce 430/PCI/SSE2/3DNOW!
      GL_VERSION: 2.1.2 NVIDIA 270.41.06
      vid.support.arb_multisample 1
      vid.mode.samples 0
      vid.support.gl20shaders 1
      Video Mode: fullscreen 640x480x32x0.00hz
      S_Startup: initializing sound output format: 48000Hz, 16 bit, 2 channels...
      Wanted audio Specification:
        Channels : 2
        Format   : 0x8010
        Frequency: 48000
        Samples  : 2048
      Obtained audio specification:
        Channels : 2
        Format   : 0x8010
        Frequency: 48000
        Samples  : 1024
      Sound format: 48000Hz, 2 channels, 16 bits per sample
      CDAudio_Init: No CD in player.
      Can't get initial CD volume
      CD Audio Initialized

    If I try -base /opt/steelstorm-episode2/steelstorm it says "command not found".
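
    A hedged guess at what the message is asking for: the option is probably -basedir (one word, as in other DarkPlaces-engine games), and it has to be passed to the game binary rather than typed on its own, which is why the shell answered "command not found" to a bare -base ... line. The data directory below is an assumption; adjust it to wherever the game's gamedata directory actually lives.

      # Hedged sketch: run the binary and point -basedir at the directory containing gamedata/
      $ /opt/steelstorm-episode2/steelstorm -basedir /opt/steelstorm-episode2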

    Read the article

  • Oracle Unveils AutoVue Release 20.1

    - by prasenjit.niyogi(at)oracle.com
    We are extremely pleased to announce the availability of Oracle's AutoVue Release 20.1. AutoVue 20.1 is the latest major release of the family of Enterprise Visualization solutions from Oracle. Highlights of the release include:

      - Unparalleled new format support and enhancements for 3D CAD, 2D CAD, ECAD and PDF documents
      - New capabilities that support end-to-end design to manufacture processes in the Electronics & High Tech space, allowing manufacturing engineers to perform accurate manufacturability reviews through better support for variants, overlays and polarity
      - Significant printing enhancements, such as printing of markup notes, support for Excel file print settings, and print in grayscale, which serve to optimize paper-based business processes
      - Powerful integration enablement capabilities to extend visualization into existing enterprise architectures and systems, including AutoVue Hotspots that enable visual navigation and action by linking visual data to structured enterprise data, and new AutoVue Document Print Services (DPS) to enrich enterprise applications with format- and platform-agnostic printing of any document type
      - Improvements for cost-effective AutoVue deployment and administration, including support for virtualization

    Release 20.1 Webcast - Attend the webcast on April 13th at 12:00 pm EST to discover what is new and exciting in the latest release. Encourage your customers, prospects, and partners to attend.

      Title: Oracle Unveils AutoVue Release 20.1
      Channel: Oracle AutoVue Channel
      Register Here: http://www.brighttalk.com/webcast/26282

    To discover more about the latest release, and to find out what customers and partners are saying about the value of this offering, check out the What's New in AutoVue 20.1 Datasheet. You can also learn all about the latest format support in the AutoVue 20.1 Format Support Sheet. We look forward to seeing you at the webcast. If you have any questions feel free to ask, and we will answer them in this forum. Enjoy AutoVue 20.1!

    Read the article

  • Languages like Tcl that have configurable syntax?

    - by boost
    I'm looking for a language that will let me do what I could do with Clipper years ago, and which I can do with Tcl, namely add functionality in a way other than just adding functions. For example, in Clipper/(x)Harbour there are commands #command, #translate, #xcommand and #xtranslate that allow things like this:

      #xcommand REPEAT; => DO WHILE .T.
      #xcommand UNTIL <cond>; => IF (<cond>); ;EXIT; ;ENDIF; ;ENDDO

      LOCAL n := 1
      REPEAT
        n := n + 1
      UNTIL n > 100

    Similarly, in Tcl I'm doing:

      proc process_range {_for_ project _from_ dat1 _to_ dat2 _by_ slice} {
          set fromDate [clock scan $dat1]
          set toDate [clock scan $dat2]
          if {$slice eq "day"} then {set incrementor [expr 24 * 60]}
          if {$slice eq "hour"} then {set incrementor 60}
          set method DateRange
          puts "Scanning from [clock format $fromDate -format "%c"] to [clock format $toDate -format "%c"] by $slice"
          for {set dateCursor $fromDate} {$dateCursor <= $toDate} {set dateCursor [clock add $dateCursor $incrementor minutes]} {
              # ...
          }
      }

      process_range for "client" from "2013-10-18 00:00" to "2013-10-20 23:59" by day

    Are there any other languages that permit this kind of, almost COBOL-esque, syntax modification? If you're wondering why I'm asking, it's for setting up stuff so that others with a not-as-geeky-as-I-am skillset can declare processing tasks.

    Read the article

  • T-SQL select where and group by date

    - by bconlon
    T-SQL has never been my favorite language, but I need to use it on a fairly regular basis and every time I seem to Google the same things. So if I add it here, it might help others with the same issues, but it will also save me time later as I will know where to look for the answers!!

    1. How do I SELECT FROM WHERE to filter on a DateTime column?

    As it happens this is easy, but I always forget. You just put the DATE value in single quotes and in standard format:

      SELECT StartDate
      FROM Customer
      WHERE StartDate >= '2011-01-01'
      ORDER BY StartDate

    2. How do I then GROUP BY and get a count by StartDate?

    Bit trickier, but you can use the built-in DATEADD and DATEDIFF to set the TIME part to midnight, allowing the GROUP BY to have a consistent value to work on:

      SELECT DATEADD(d, DATEDIFF(d, 0, StartDate), 0) [Customer Creation Date],
             COUNT(*) [Number Of New Customers]
      FROM Customer
      WHERE StartDate >= '2011-01-01'
      GROUP BY DATEADD(d, DATEDIFF(d, 0, StartDate), 0)
      ORDER BY [Customer Creation Date]

    Note: the [Customer Creation Date] and [Number Of New Customers] column aliases just provide more readable column headers.

    3. Finally, how can you format the DATETIME to only show the DATE part (after all, the TIME part is now always midnight)?

    The built-in CONVERT function allows you to convert the DATETIME to a CHAR array using a specific format. The format code is a bit arbitrary and needs looking up, but 101 is the U.S. standard mm/dd/yyyy, and 103 is the U.K. standard dd/mm/yyyy.

      SELECT CONVERT(CHAR(10), DATEADD(d, DATEDIFF(d, 0, StartDate), 0), 103) [Customer Creation Date],
             COUNT(*) [Number Of New Customers]
      FROM Customer
      WHERE StartDate >= '2011-01-01'
      GROUP BY DATEADD(d, DATEDIFF(d, 0, StartDate), 0)
      ORDER BY [Customer Creation Date]

    Read the article
