Search Results

Search found 13757 results on 551 pages for 'decimal format'.


  • DecimalFormat and Double.valueOf()

    - by folone
    Hello. I'm trying to get rid of unnecessary digits after the decimal separator of my double value. I'm doing it this way:

        DecimalFormat format = new DecimalFormat("#.#####");
        value = Double.valueOf(format.format(41251.50000000012343));

    But when I run this code, it throws:

        java.lang.NumberFormatException: For input string: "41251,5"
            at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1224)
            at java.lang.Double.valueOf(Double.java:447)
            at ...

    As I see it, Double.valueOf() works great with strings like "11.1", but it chokes on strings like "11,1". How do I work around this? Is there a more elegant way than something like

        Double.valueOf(format.format(41251.50000000012343).replaceAll(",", "."));

    Is there a way to override the default decimal separator of the DecimalFormat class?
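    For what it's worth, DecimalFormat takes its separator from the default locale. One way to pin it to '.' regardless of locale (a minimal sketch) is to construct the format with explicit DecimalFormatSymbols:

        import java.text.DecimalFormat;
        import java.text.DecimalFormatSymbols;
        import java.util.Locale;

        public class SeparatorDemo {
            public static void main(String[] args) {
                // Force '.' as the decimal separator regardless of the default locale.
                DecimalFormatSymbols symbols = new DecimalFormatSymbols(Locale.US);
                DecimalFormat format = new DecimalFormat("#.#####", symbols);
                double value = Double.valueOf(format.format(41251.50000000012343));
                System.out.println(value); // 41251.5
            }
        }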

    Read the article

  • Formatting my String

    - by pringlesinn
    I need to write currency values like $35.40 (thirty five dollars and forty cents) and, after that, write some "****", so the end result is: thirty five dollars and forty cents********* in a maximum of 100 characters. I've asked a question about something very similar, but I couldn't understand the main command.

        String format = String.format("%%-%ds", 100);
        String valorPorExtenso = String.format(format, new Extenso(tituloTO.getValor()).toString());

    What do I need to change in format to put *** at the end of my sentence? The way it is now, it puts spaces.
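    String.format pads only with spaces (and zeros for numeric conversions), so one workaround, sketched here with a hypothetical padRight helper, is to append the fill character manually:

        public class PadDemo {
            // Pads s on the right with the given character up to width characters.
            static String padRight(String s, int width, char pad) {
                StringBuilder sb = new StringBuilder(s);
                while (sb.length() < width) {
                    sb.append(pad);
                }
                return sb.toString();
            }

            public static void main(String[] args) {
                String text = "thirty five dollars and forty cents";
                System.out.println(padRight(text, 100, '*'));
            }
        }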

    Read the article

  • How do I set up my @product=Product.find(params[:id]) to have a product_url?

    - by montooner
    Trying to recreate script/generate scaffold, and I've gotten through a number of Rails basics. I suspect that I need to configure a default product URL somewhere. But where do I do this? Setup:

        Have: def edit { @product = Product.find(params[:id]) }
        Have edit.html.erb, with an edit form posting to action = :create
        Have def create { ... }, with the code redirect_to(@product, ...)

    Getting error:

        undefined method `product_url' for #<ProductsController:0x56102b0

    My def update:

        def update
          @product = Product.find(params[:id])
          respond_to do |format|
            if @product.update_attributes(params[:product])
              format.html { redirect_to(@product, :notice => 'Product was successfully updated.') }
              format.xml  { head :ok }
            else
              format.html { render :action => "edit" }
              format.xml  { render :xml => @product.errors, :status => :unprocessable_entity }
            end
          end
        end

    Read the article

  • Undefined method 'total_entries' after upgrading Rails 2.2.2 to 2.3.5

    - by Trevor
    I am upgrading a Rails application from 2.2.2 to 2.3.5. The only remaining error is when I invoke total_entries for creating a jqgrid. Error:

        NoMethodError (undefined method `total_entries' for #<Array:0xbbe9ab0>)

    Code snippet:

        @route = Route.find( :all, :conditions => "id in (#{params[:id]})" ) {
          if params[:page].present? then
            paginate :page => params[:page], :per_page => params[:rows]
            order_by "#{params[:sidx]} #{params[:sord]}"
          end
        }
        respond_to do |format|
          format.html # show.html.erb
          format.xml   { render :xml => @route }
          format.json  { render :json => @route }
          format.jgrid { render :json => @route.to_jqgrid_json( [ :id, :name ],
                           params[:page], params[:rows], @route.total_entries ) }
        end

    Any ideas? Thanks!

    Read the article

  • JSON is not nested in rails view

    - by SeanGeneva
    I have several models in a hierarchy, 1:many at each level. Each class is associated only with the class above it and the one below it, i.e.: L1 course, L2 unit, L3 unit layout, L4 layout fields, L5 table fields (not in code, but a sibling of layout fields). I am trying to build a JSON response of the entire hierarchy.

        def show
          @course = Course.find(params[:id])
          respond_to do |format|
            format.html # show.html.erb
            format.json do
              @course = Course.find(params[:id])
              @units = @course.units.all
              @unit_layouts = UnitLayout.where(:unit_id => @units)
              @layout_fields = LayoutField.where(:unit_layout_id => @unit_layouts)
              response = {:course => @course, :units => @units,
                          :unit_layouts => @unit_layouts, :layout_fields => @layout_fields}
              respond_to do |format|
                format.json { render :json => response }
              end
            end
          end
        end

    The code brings back the correct values, but the units, unit_layouts and layout_fields are all nested at the same level under course. I would like them to be nested inside their parent.

    Read the article

  • Error: A generic error occurred in GDI+

    - by sanfra1983
    Hi, I created a web project on a server, and when I upload an image it shows me the error "Error: A generic error occurred in GDI+." I have read many links on the net that talk about this issue, and although I made the suggested changes, nothing worked. I am wondering whether it might be an issue of permissions on the folders. In fact I have two folders, one inside the other. This is the code to resize the image (reconstructed from a rough translation of my original):

        public Bitmap ResizeImage(Stream stream, int? lnWidth, int? lnHeight)
        {
            System.Drawing.Bitmap bmpOut = null;
            const int defaultWidth = 800;
            const int defaultHeight = 600;
            int width = lnWidth == null ? defaultWidth : (int)lnWidth;
            int height = lnHeight == null ? defaultHeight : (int)lnHeight;
            try
            {
                Bitmap loBMP = new Bitmap(stream);
                ImageFormat loFormat = loBMP.RawFormat;
                decimal lnRatio;
                int lnNewWidth = 0;
                int lnNewHeight = 0;
                if (loBMP.Width < width && loBMP.Height < height)
                {
                    return loBMP;
                }
                if (loBMP.Width > loBMP.Height)
                {
                    lnRatio = (decimal)width / loBMP.Width;
                    lnNewWidth = width;
                    decimal lnTemp = loBMP.Height * lnRatio;
                    lnNewHeight = (int)lnTemp;
                }
                else
                {
                    lnRatio = (decimal)height / loBMP.Height;
                    lnNewHeight = height;
                    decimal lnTemp = loBMP.Width * lnRatio;
                    lnNewWidth = (int)lnTemp;
                }
                bmpOut = new Bitmap(lnNewWidth, lnNewHeight);
                Graphics g = Graphics.FromImage(bmpOut);
                g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
                g.FillRectangle(Brushes.White, 0, 0, lnNewWidth, lnNewHeight);
                g.DrawImage(loBMP, 0, 0, lnNewWidth, lnNewHeight);
                loBMP.Dispose();
            }
            catch
            {
                return null;
            }
            return bmpOut;
        }

    and this is the code I put in the code-behind:

        string filepath  = AppDomain.CurrentDomain.BaseDirectory + "img_veterinario/";
        string filepathM = AppDomain.CurrentDomain.BaseDirectory + "img_veterinario/img_veterinarioM";
        Reseize r = new Reseize();
        Bitmap photosFileOriginal  = r.ResizeImage(fucasiclinici.PostedFile.InputStream, 400, 400);
        Bitmap photosFileMiniatura = r.ResizeImage(fucasiclinici.PostedFile.InputStream, 72, 72);
        string filename = Path.GetFileName(fucasiclinici.PostedFile.FileName);
        photosFileOriginal.Save(Path.Combine(filepath, filename));
        photosFileMiniatura.Save(Path.Combine(filepathM, filename));

    Can you help me? Thanks

    Read the article

  • What is the best way to handle dynamic content_type in Sinatra

    - by lusis
    I'm currently doing the following but it feels "kludgy":

        module Sinatra
          module DynFormat
            def dform(data,ct)
              if ct == 'xml';return data.to_xml;end
              if ct == 'json';return data.to_json;end
            end
          end
          helpers DynFormat
        end

    My goal is to plan ahead. Right now we're only working with XML for this particular web service, but we want to move over to JSON as soon as all the components in our stack support it. Here's a sample route:

        get '/api/people/named/:name/:format' do
          format = params[:format]
          h = {'xml' => 'text/xml','json' => 'application/json'}
          content_type h[format], :charset => 'utf-8'
          person = params[:name]
          salesperson = Salespeople.find(:all, :conditions => ['name LIKE ?', "%#{person}%"])
          "#{dform(salesperson,format)}"
        end

    It just feels like I'm not doing it the best way possible.

    Read the article

  • I can't get a field on a report from a view

    - by felipedz
    When I get a field, this works fine. But when I get a field from a VIEW, there is a problem, because the code of the VIEW is:

        CREATE OR REPLACE VIEW tabla_clientes AS
          SELECT id_cliente, nombre,
                 CONCAT('$ ', FORMAT(monto_a_favor,0), '???'),
                 CONCAT('$ ', FORMAT(calcular_monto_por_cobrar_cliente(id_cliente),0))
          FROM cliente;

    When I compile this, errors appear for the names of the fields:

        Description                                          | Object
        -----------------------------------------------------+----------------------------------------
        Syntax error, insert ";" to complete BlockStatements | ${CONCAT('$ ',FORMAT(monto_a_favor,0)}
        Syntax error on tokens, delete these tokens          | ${CONCAT('$ ',FORMAT(monto_a_favor,0)}
        Syntax error on token ",", delete this token         | ${CONCAT('$ ',FORMAT(monto_a_favor,0)}

    If I change the name of this field, another error appears.

    Read the article

  • C#: Object having two constructors: how to limit which properties are set together?

    - by Dr. Zim
    Say you have a Price object that accepts either an (int quantity, decimal price) or a string containing "4/$3.99". Is there a way to limit which properties can be set together? Feel free to correct me in my logic below. The test: A and B are equal to each other, but the C example should not be allowed. Thus the question: how to enforce that all three parameters are not invoked as in the C example? (A sketch of one approach follows the class listing below.)

        AdPrice A = new AdPrice { priceText = "4/$3.99"};                        // Valid
        AdPrice B = new AdPrice { qty = 4, price = 3.99m};                       // Valid
        AdPrice C = new AdPrice { qty = 4, priceText = "2/$1.99", price = 3.99m};// Not

    The class:

        public class AdPrice {
            private int _qty;
            private decimal _price;
            private string _priceText;

    The constructors:

            public AdPrice () : this( qty: 0, price: 0.0m) {} // Default constructor
            public AdPrice (int qty = 0, decimal price = 0.0m) { // Numbers only
                this.qty = qty;
                this.price = price;
            }
            public AdPrice (string priceText = "0/$0.00") { // String only
                this.priceText = priceText;
            }

    The methods:

            private void SetPriceValues() {
                var matches = Regex.Match(_priceText,
                    @"^\s?((?<qty>\d+)\s?/)?\s?[$]?\s?(?<price>[0-9]?\.?[0-9]?[0-9]?)");
                if( matches.Success) {
                    if (!Decimal.TryParse(matches.Groups["price"].Value, out this._price))
                        this._price = 0.0m;
                    if (!Int32.TryParse(matches.Groups["qty"].Value, out this._qty))
                        this._qty = (this._price > 0 ? 1 : 0);
                    else if (this._price > 0 && this._qty == 0)
                        this._qty = 1;
                }
            }
            private void SetPriceString() {
                this._priceText = (this._qty > 1 ? this._qty.ToString() + '/' : "")
                    + String.Format("{0:C}",this.price);
            }

    The accessors:

            public int qty {
                get { return this._qty; }
                set { this._qty = value; this.SetPriceString(); }
            }
            public decimal price {
                get { return this._price; }
                set { this._price = value; this.SetPriceString(); }
            }
            public string priceText {
                get { return this._priceText; }
                set { this._priceText = value; this.SetPriceValues(); }
            }
        }
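    One common way to make example C unrepresentable (not the poster's code, just a sketch of the immutable-object approach, written in Java for consistency with the other examples on this page) is to remove public setters and expose one static factory per valid combination; the text format "qty/$price" from the question is assumed:

        import java.math.BigDecimal;

        public final class AdPrice {
            private final int qty;
            private final BigDecimal price;

            private AdPrice(int qty, BigDecimal price) {
                this.qty = qty;
                this.price = price;
            }

            // Valid combination 1: explicit quantity and price.
            public static AdPrice of(int qty, BigDecimal price) {
                return new AdPrice(qty, price);
            }

            // Valid combination 2: parse "4/$3.99"; any other mix cannot be expressed.
            public static AdPrice parse(String priceText) {
                String[] parts = priceText.split("/");
                int qty = parts.length == 2 ? Integer.parseInt(parts[0].trim()) : 1;
                BigDecimal price = new BigDecimal(parts[parts.length - 1].replace("$", "").trim());
                return new AdPrice(qty, price);
            }

            public int qty() { return qty; }
            public BigDecimal price() { return price; }

            public static void main(String[] args) {
                System.out.println(AdPrice.parse("4/$3.99").qty()); // 4
            }
        }

    Because every instance is created through one of the two factories and never mutated, a "qty plus priceText plus price" combination simply has no constructor to reach.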

    Read the article

  • Issues using a function with variadic arguments

    - by Sausages
    I'm trying to write a logging function and have tried several different approaches to dealing with the variadic arguments, but am having problems with all of them. Here's the latest:

        - (void) log:(NSString *)format, ... {
            if (self.loggingEnabled) {
                va_list vl;
                va_start(vl, format);
                NSString* str = [[NSString alloc] initWithFormat:format arguments:vl];
                va_end(vl);
                NSLog(format);
            }
        }

    If I call this like this:

        [self log:@"I like: %@", @"sausages"];

    then I get an EXC_BAD_ACCESS at the NSLog line (there's also a compiler warning that the format string is not a string literal). However, if in Xcode's console I do "po str", it displays "I like: sausages", so str seems OK.
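    For comparison, here is the same guarded variadic logging pattern sketched in Java; the key point, which also applies to the Objective-C code above, is to format once and then emit the formatted result as plain data rather than passing it back through a formatter:

        public class Logger {
            private final boolean loggingEnabled;

            public Logger(boolean loggingEnabled) {
                this.loggingEnabled = loggingEnabled;
            }

            public void log(String format, Object... args) {
                if (loggingEnabled) {
                    // Format once, then print the result as data; never hand the
                    // already-formatted string back to a formatting call.
                    String str = String.format(format, args);
                    System.out.println(str);
                }
            }

            public static void main(String[] args) {
                new Logger(true).log("I like: %s", "sausages");
            }
        }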

    Read the article

  • SQL SERVER – Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace) – Puzzle for You

    - by pinaldave
    First of all, if you are going to say this is a very old subject, I agree: this is a very (very) old subject. I believe in earlier times this was the only option we had to monitor log space. As new versions of SQL Server were released, we were all equipped with DMVs, performance counters, Extended Events and many more new enhancements. However, during all these years, I have always used DBCC SQLPERF(logspace) to get the details of the logs. It may be because when I started my career I remembered this command and it did what I wanted all the time. Recently I received an interesting question and I thought I should request your help. However, before I request your help, let us see the traditional usage of DBCC SQLPERF(logspace). Every time I have to get the details of the log, I run the following script. Additionally, I like to store the details of when the log file snapshot was taken, so I can go back and check the status of log file growth. This gives me a fair estimation of when the log file was growing.

        CREATE TABLE dbo.logSpaceUsage (
            id INT IDENTITY (1,1),
            logDate DATETIME DEFAULT GETDATE(),
            databaseName SYSNAME,
            logSize DECIMAL(18,5),
            logSpaceUsed DECIMAL(18,5),
            [status] INT )
        GO
        INSERT INTO dbo.logSpaceUsage (databaseName, logSize, logSpaceUsed, [status])
        EXEC ('DBCC SQLPERF(logspace)')
        GO
        SELECT * FROM dbo.logSpaceUsage
        GO

    I used to record the details of log file growth every hour of the day, and then we used to plot charts using Reporting Services (and Excel in much earlier times). Well, if you look at the script above, it is a very simple script. Now here is the puzzle for you.

    Puzzle 1: Write a script based on the table which gives you the time period with the highest growth, based on the data stored in the table.

    Puzzle 2: Write a script based on the table which gives you the amount of log file growth from the beginning of the table to the latest recording of the data.

    You may have to run the above script at some interval to get various data samples of the log file to answer the puzzles. To make things simple, I am giving you a sample script with expected answers listed below for both of the puzzles. Here is the sample query for the puzzle:

        -- This is sample query for puzzle
        CREATE TABLE dbo.logSpaceUsage (
            id INT IDENTITY (1,1),
            logDate DATETIME DEFAULT GETDATE(),
            databaseName SYSNAME,
            logSize DECIMAL(18,5),
            logSpaceUsed DECIMAL(18,5),
            [status] INT )
        GO
        INSERT INTO dbo.logSpaceUsage (databaseName, logDate, logSize, logSpaceUsed, [status])
        SELECT 'SampleDB1', '2012-07-01 7:00:00.000', 5, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 9:00:00.000', 16, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 11:00:00.000', 9, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 14:00:00.000', 18, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-01 7:00:00.000', 5, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-04 7:00:00.000', 15, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-09 7:00:00.000', 25, 10, 0
        GO

    Expected result of Puzzle 1: You will notice that there are two entries for database SampleDB3, as there were two instances of the log file growing by the same value.

    Expected result of Puzzle 2: Well, please leave a comment with a valid answer and I will post valid answers with due credit next week. Not to mention that winners will get a surprise gift from me.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: DBCC

    Read the article

  • Where to store front-end data for "object calculator"

    - by Justin Grahn
    I have recently completed a language library that acts as a giant filter for food items, and flows a bit like this: Products -> Recipes -> MenuItems -> Meals and finally, upon submission, creates an Order. I have also completed a database structure that stores all the pertinent information for each class, and it seems to fit my needs. The issue I'm having is linking the two. I imagined all of the information being local to each instance of the product, where there exists one back-end user who edits and manipulates data, and multiple front-end users who select their Meal(s) to create an Order. Ideally, all of the front-end users would have all of this information stored locally within the library, and would update the library on startup from a database. How should I go about storing the data so that I can load it into the library every time the user opens the application? Do I package a database onboard and just load and populate every time? The only method I can currently conceive of doing this, even if I only have 500 possible Product objects, would require me to foreach the list for every Product that I need to match to a Recipe, and so on and so forth, every time I relaunch the program, which seems like a lot of wasteful loading (see the sketch after the listings below). Here is the general flow of my architecture:

    Products:

        public class Product : IPortionable {
            public Product(string n, uint pNumber = 0) {
                name = n;
                productNumber = pNumber;
            }
            public string name { get; set; }
            public uint productNumber { get; set; }
        }

    Recipes:

        public Recipe(string n, decimal yieldAmt, Volume.Unit unit) {
            name = n;
            yield = new Volume(yieldAmt, unit);
            yield.ConvertUnit();
        }
        /// <summary>
        /// Creates a new ingredient object
        /// </summary>
        /// <param name="n">Name</param>
        /// <param name="yieldAmt">Recipe Yield</param>
        /// <param name="unit">Unit of Yield</param>
        public Recipe(string n, decimal yieldAmt, Weight.Unit unit) {
            name = n;
            yield = new Weight(yieldAmt, unit);
        }
        public Recipe(Recipe r) {
            name = r.name;
            yield = r.yield;
            ingredients = r.ingredients;
        }
        public string name { get; set; }
        public IMeasure yield;
        public Dictionary<IPortionable, IMeasure> ingredients = new Dictionary<IPortionable,IMeasure>();

    MenuItems:

        public abstract class MenuItem : IScalable {
            public static string title = null;
            public string name { get; set; }
            public decimal maxPortionSize { get; set; }
            public decimal minPortionSize { get; set; }
            public Dictionary<IPortionable, IMeasure> ingredients = new Dictionary<IPortionable, IMeasure>();

    and Meal:

        public class Meal {
            public Meal(int guests) {
                guestCount = guests;
            }
            public int guestCount { get; private set; }
            //TODO: Make a new MainCourse class that holds pasta and Entree
            public Dictionary<string, int> counts = new Dictionary<string, int>(){
                {MainCourse.title, 0},
                {Side.title , 0},
                {Appetizer.title, 0}
            };
            public List<MenuItem> items = new List<MenuItem>();

    The database just stores and links each of these basic names and amounts together using IDs (RecipeID, ProductID and MenuItemID).
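    One standard pattern for the load-and-link step (a sketch with hypothetical row types, not the schema above, and in Java rather than the poster's C#) is to read each table once and index it by primary key, so wiring Products to Recipes becomes a hash lookup instead of a foreach scan per item:

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class CatalogLoader {
            record Product(int id, String name) {}
            record IngredientRow(int recipeId, int productId, double amount) {}

            // Index products once by ID so each ingredient row resolves in O(1),
            // instead of scanning the whole product list for every lookup.
            static Map<Integer, Product> indexById(List<Product> products) {
                Map<Integer, Product> byId = new HashMap<>();
                for (Product p : products) {
                    byId.put(p.id(), p);
                }
                return byId;
            }

            static void link(List<IngredientRow> rows, Map<Integer, Product> productsById) {
                for (IngredientRow row : rows) {
                    Product p = productsById.get(row.productId()); // O(1), no foreach scan
                    // attach p to the recipe identified by row.recipeId() here
                }
            }
        }

    With this shape, loading 500 products and all their recipe links is a single pass over each table rather than a nested scan, regardless of whether the rows come from an embedded database file or a remote one.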

    Read the article

  • django/python: is one view that handles two sibling models a good idea?

    - by clime
    I am using django multi-table inheritance: Video and Image are models derived from Media. I have implemented two views: video_list and image_list, which are just proxies to media_list. media_list returns images or videos (based on the input parameter model) for a certain object, which can be of type Event, Member, or Crag. The view alters its behaviour based on the input parameter action (a better name would be mode), which can be of value "edit" or "view". The problem is that I need to ask whether the input parameter model contains Video or Image in media_list so that I can do the right thing. A similar condition is also in the helper method media_edit_list that is called from the view. I don't particularly like it, but the only alternative I can think of is to have separate (but almost the same) logic for video_list and image_list, and then probably also separate helper methods for videos and images: video_edit_list, image_edit_list, video_view_list, image_view_list. So four functions instead of just two. That I like even less, because the video functions would be very similar to the respective image functions. What do you recommend? Here is an extract of the relevant parts: http://pastebin.com/07t4bdza. I'll also paste the code here:

        #urls
        url(r'^media/images/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<action>(view|edit))/$', views.image_list, name='image-list')
        url(r'^media/videos/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<action>(view|edit))/$', views.video_list, name='video-list')

        #views
        def image_list(request, rel_model_tag, rel_object_id, mode):
            return media_list(request, Image, rel_model_tag, rel_object_id, mode)

        def video_list(request, rel_model_tag, rel_object_id, mode):
            return media_list(request, Video, rel_model_tag, rel_object_id, mode)

        def media_list(request, model, rel_model_tag, rel_object_id, mode):
            rel_model = tag_to_model(rel_model_tag)
            rel_object = get_object_or_404(rel_model, pk=rel_object_id)
            if model == Image:
                star_media = rel_object.star_image
            else:
                star_media = rel_object.star_video
            filter_params = {}
            if rel_model == Event:
                filter_params['event'] = rel_object_id
            elif rel_model == Member:
                filter_params['members'] = rel_object_id
            elif rel_model == Crag:
                filter_params['crag'] = rel_object_id
            media_list = model.objects.filter(~Q(id=star_media.id)).filter(**filter_params).order_by('date_added').all()
            context = {
                'media_list': media_list,
                'star_media': star_media,
            }
            if mode == 'edit':
                return media_edit_list(request, model, rel_model_tag, rel_object_id, context)
            return media_view_list(request, model, rel_model_tag, rel_object_id, context)

        def media_view_list(request, model, rel_model_tag, rel_object_id, context):
            if request.is_ajax():
                context['base_template'] = 'boxes/base-lite.html'
            return render(request, 'media/list-items.html', context)

        def media_edit_list(request, model, rel_model_tag, rel_object_id, context):
            if model == Image:
                get_media_edit_record = get_image_edit_record
            else:
                get_media_edit_record = get_video_edit_record
            media_list = [get_media_edit_record(media, rel_model_tag, rel_object_id) for media in context['media_list']]
            if context['star_media']:
                star_media = get_media_edit_record(context['star_media'], rel_model_tag, rel_object_id)
            else:
                star_media = None
            json = simplejson.dumps({
                'star_media': star_media,
                'media_list': media_list,
            })
            return HttpResponse(json, content_type=json_response_mimetype(request))

        def get_image_edit_record(image, rel_model_tag, rel_object_id):
            record = {
                'url': image.image.url,
                'name': image.title or image.filename,
                'type': mimetypes.guess_type(image.image.path)[0] or 'image/png',
                'thumbnailUrl': image.thumbnail_2.url,
                'size': image.image.size,
                'id': image.id,
                'media_id': image.media_ptr.id,
                'starUrl': reverse('image-star', kwargs={'image_id': image.id, 'rel_model_tag': rel_model_tag, 'rel_object_id': rel_object_id}),
            }
            return record

        def get_video_edit_record(video, rel_model_tag, rel_object_id):
            record = {
                'url': video.embed_url,
                'name': video.title or video.url,
                'type': None,
                'thumbnailUrl': video.thumbnail_2.url,
                'size': None,
                'id': video.id,
                'media_id': video.media_ptr.id,
                'starUrl': reverse('video-star', kwargs={'video_id': video.id, 'rel_model_tag': rel_model_tag, 'rel_object_id': rel_object_id}),
            }
            return record

        # models
        class Media(models.Model, WebModel):
            title = models.CharField('title', max_length=128, default='', db_index=True, blank=True)
            event = models.ForeignKey(Event, null=True, default=None, blank=True)
            crag = models.ForeignKey(Crag, null=True, default=None, blank=True)
            members = models.ManyToManyField(Member, blank=True)
            added_by = models.ForeignKey(Member, related_name='added_images')
            date_added = models.DateTimeField('date added', auto_now_add=True, null=True, default=None, editable=False)

        class Image(Media):
            image = ProcessedImageField(upload_to='uploads',
                processors=[ResizeToFit(width=1024, height=1024, upscale=False)],
                format='JPEG', options={'quality': 75})
            thumbnail_1 = ImageSpecField(source='image',
                processors=[SmartResize(width=178, height=134)],
                format='JPEG', options={'quality': 75})
            thumbnail_2 = ImageSpecField(source='image',
                #processors=[SmartResize(width=256, height=192)],
                processors=[ResizeToFit(height=164)],
                format='JPEG', options={'quality': 75})

        class Video(Media):
            url = models.URLField('url', max_length=256, default='')
            embed_url = models.URLField('embed url', max_length=256, default='', blank=True)
            author = models.CharField('author', max_length=64, default='', blank=True)
            thumbnail = ProcessedImageField(upload_to='uploads',
                processors=[ResizeToFit(width=1024, height=1024, upscale=False)],
                format='JPEG', options={'quality': 75}, null=True, default=None, blank=True)
            thumbnail_1 = ImageSpecField(source='thumbnail',
                processors=[SmartResize(width=178, height=134)],
                format='JPEG', options={'quality': 75})
            thumbnail_2 = ImageSpecField(source='thumbnail',
                #processors=[SmartResize(width=256, height=192)],
                processors=[ResizeToFit(height=164)],
                format='JPEG', options={'quality': 75})

        class Crag(models.Model, WebModel):
            name = models.CharField('name', max_length=64, default='', db_index=True)
            normalized_name = models.CharField('normalized name', max_length=64, default='', editable=False)
            type = models.IntegerField('crag type', null=True, default=None, choices=crag_types)
            description = models.TextField('description', default='', blank=True)
            country = models.ForeignKey('country', null=True, default=None) #TODO: make this not null when db enables it
            latitude = models.FloatField('latitude', null=True, default=None)
            longitude = models.FloatField('longitude', null=True, default=None)
            location_index = FixedCharField('location index', length=24, default='', editable=False, db_index=True) # handled by db, used for marker clustering
            added_by = models.ForeignKey('member', null=True, default=None)
            #route_count = models.IntegerField('route count', null=True, default=None, editable=False)
            date_created = models.DateTimeField('date created', auto_now_add=True, null=True, default=None, editable=False)
            last_modified = models.DateTimeField('last modified', auto_now=True, null=True, default=None, editable=False)
            star_image = models.ForeignKey('Image', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)
            star_video = models.ForeignKey('Video', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)

    Read the article

  • django/python: is one view that handles two separate models a good idea?

    - by clime
    I am using django multi-table inheritance: Video and Image are models derived from Media. I have implemented two views: video_list and image_list, which are just proxies to media_list. media_list returns images or videos (based on the input parameter model) for a certain object, which can be of type Event, Member, or Crag. It alters its behaviour based on the input parameter action, which can be either "edit" or "view". The problem is that I need to ask whether the input parameter model contains Video or Image in media_list so that I can do the right thing. A similar condition is also in the helper method media_edit_list that is called from the view. I don't particularly like it, but the only alternative I can think of is to have separate logic for video_list and image_list, and then probably also separate helper methods for videos and images: video_edit_list, image_edit_list, video_view_list, image_view_list. So four functions instead of just two. That I like even less, because the video functions would be very similar to the respective image functions. What do you recommend? Here is an extract of the relevant parts: http://pastebin.com/07t4bdza. I'll also paste the code here:

        #urls
        url(r'^media/images/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<action>(view|edit))/$', views.video_list, name='image-list')
        url(r'^media/videos/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<action>(view|edit))/$', views.image_list, name='video-list')

        #views
        def image_list(request, rel_model_tag, rel_object_id, action):
            return media_list(request, Image, rel_model_tag, rel_object_id, action)

        def video_list(request, rel_model_tag, rel_object_id, action):
            return media_list(request, Video, rel_model_tag, rel_object_id, action)

        def media_list(request, model, rel_model_tag, rel_object_id, action):
            rel_model = tag_to_model(rel_model_tag)
            rel_object = get_object_or_404(rel_model, pk=rel_object_id)
            if model == Image:
                star_media = rel_object.star_image
            else:
                star_media = rel_object.star_video
            filter_params = {}
            if rel_model == Event:
                filter_params['media__event'] = rel_object_id
            elif rel_model == Member:
                filter_params['media__members'] = rel_object_id
            elif rel_model == Crag:
                filter_params['media__crag'] = rel_object_id
            media_list = model.objects.filter(~Q(id=star_media.id)).filter(**filter_params).order_by('media__date_added').all()
            context = {
                'media_list': media_list,
                'star_media': star_media,
            }
            if action == 'edit':
                return media_edit_list(request, model, rel_model_tag, rel_model_id, context)
            return media_view_list(request, model, rel_model_tag, rel_model_id, context)

        def media_view_list(request, model, rel_model_tag, rel_object_id, context):
            if request.is_ajax():
                context['base_template'] = 'boxes/base-lite.html'
            return render(request, 'media/list-items.html', context)

        def media_edit_list(request, model, rel_model_tag, rel_object_id, context):
            if model == Image:
                get_media_record = get_image_record
            else:
                get_media_record = get_video_record
            media_list = [get_media_record(media, rel_model_tag, rel_object_id) for media in context['media_list']]
            if context['star_media']:
                star_media = get_media_record(star_media, rel_model_tag, rel_object_id)
                star_media['starred'] = True
            else:
                star_media = None
            json = simplejson.dumps({
                'star_media': star_media,
                'media_list': media_list,
            })
            return HttpResponse(json, content_type=json_response_mimetype(request))

        # models
        class Media(models.Model, WebModel):
            title = models.CharField('title', max_length=128, default='', db_index=True, blank=True)
            event = models.ForeignKey(Event, null=True, default=None, blank=True)
            crag = models.ForeignKey(Crag, null=True, default=None, blank=True)
            members = models.ManyToManyField(Member, blank=True)
            added_by = models.ForeignKey(Member, related_name='added_images')
            date_added = models.DateTimeField('date added', auto_now_add=True, null=True, default=None, editable=False)

            def __unicode__(self):
                return self.title

            def get_absolute_url(self):
                return self.image.url if self.image else self.video.embed_url

        class Image(Media):
            image = ProcessedImageField(upload_to='uploads',
                processors=[ResizeToFit(width=1024, height=1024, upscale=False)],
                format='JPEG', options={'quality': 75})
            thumbnail_1 = ImageSpecField(source='image',
                processors=[SmartResize(width=178, height=134)],
                format='JPEG', options={'quality': 75})
            thumbnail_2 = ImageSpecField(source='image',
                #processors=[SmartResize(width=256, height=192)],
                processors=[ResizeToFit(height=164)],
                format='JPEG', options={'quality': 75})

        class Video(Media):
            url = models.URLField('url', max_length=256, default='')
            embed_url = models.URLField('embed url', max_length=256, default='', blank=True)
            author = models.CharField('author', max_length=64, default='', blank=True)
            thumbnail = ProcessedImageField(upload_to='uploads',
                processors=[ResizeToFit(width=1024, height=1024, upscale=False)],
                format='JPEG', options={'quality': 75}, null=True, default=None, blank=True)
            thumbnail_1 = ImageSpecField(source='thumbnail',
                processors=[SmartResize(width=178, height=134)],
                format='JPEG', options={'quality': 75})
            thumbnail_2 = ImageSpecField(source='thumbnail',
                #processors=[SmartResize(width=256, height=192)],
                processors=[ResizeToFit(height=164)],
                format='JPEG', options={'quality': 75})

        class Crag(models.Model, WebModel):
            name = models.CharField('name', max_length=64, default='', db_index=True)
            normalized_name = models.CharField('normalized name', max_length=64, default='', editable=False)
            type = models.IntegerField('crag type', null=True, default=None, choices=crag_types)
            description = models.TextField('description', default='', blank=True)
            country = models.ForeignKey('country', null=True, default=None) #TODO: make this not null when db enables it
            latitude = models.FloatField('latitude', null=True, default=None)
            longitude = models.FloatField('longitude', null=True, default=None)
            location_index = FixedCharField('location index', length=24, default='', editable=False, db_index=True) # handled by db, used for marker clustering
            added_by = models.ForeignKey('member', null=True, default=None)
            #route_count = models.IntegerField('route count', null=True, default=None, editable=False)
            date_created = models.DateTimeField('date created', auto_now_add=True, null=True, default=None, editable=False)
            last_modified = models.DateTimeField('last modified', auto_now=True, null=True, default=None, editable=False)
            star_image = models.OneToOneField('Image', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)
            star_video = models.OneToOneField('Video', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)

    Read the article

  • Creating objects from a trivial graph format text file (Java, Dijkstra algorithm)

    - by user560084
    I want to create objects, Vertex and Edge, from a trivial graph format text file. One of the programmers here suggested that I use the trivial graph format to store data for the Dijkstra algorithm. The problem is that at the moment all the information, e.g. weights and links, is in the source code. I want to have a separate text file for that and read it into the program. I thought about scanning through the text file using Scanner, but I am not quite sure how to create different objects from the same file. Could I have some help please? The file is:

        v0 Harrisburg
        v1 Baltimore
        v2 Washington
        v3 Philadelphia
        v4 Binghamton
        v5 Allentown
        v6 New York
        #
        v0 v1 79.83
        v0 v5 81.15
        v1 v0 79.75
        v1 v2 39.42
        v1 v3 103.00
        v2 v1 38.65
        v3 v1 102.53
        v3 v5 61.44
        v3 v6 96.79
        v4 v5 133.04
        v5 v0 81.77
        v5 v3 62.05
        v5 v4 134.47
        v5 v6 91.63
        v6 v3 97.24
        v6 v5 87.94

    and the Dijkstra algorithm code is:

        /* Downloaded from: http://en.literateprograms.org/Special:Downloadcode/Dijkstra%27s_algorithm_%28Java%29 */
        import java.util.PriorityQueue;
        import java.util.List;
        import java.util.ArrayList;
        import java.util.Collections;

        class Vertex implements Comparable<Vertex> {
            public final String name;
            public Edge[] adjacencies;
            public double minDistance = Double.POSITIVE_INFINITY;
            public Vertex previous;
            public Vertex(String argName) { name = argName; }
            public String toString() { return name; }
            public int compareTo(Vertex other) {
                return Double.compare(minDistance, other.minDistance);
            }
        }

        class Edge {
            public final Vertex target;
            public final double weight;
            public Edge(Vertex argTarget, double argWeight) {
                target = argTarget;
                weight = argWeight;
            }
        }

        public class Dijkstra {
            public static void computePaths(Vertex source) {
                source.minDistance = 0.;
                PriorityQueue<Vertex> vertexQueue = new PriorityQueue<Vertex>();
                vertexQueue.add(source);
                while (!vertexQueue.isEmpty()) {
                    Vertex u = vertexQueue.poll();
                    // Visit each edge exiting u
                    for (Edge e : u.adjacencies) {
                        Vertex v = e.target;
                        double weight = e.weight;
                        double distanceThroughU = u.minDistance + weight;
                        if (distanceThroughU < v.minDistance) {
                            vertexQueue.remove(v);
                            v.minDistance = distanceThroughU;
                            v.previous = u;
                            vertexQueue.add(v);
                        }
                    }
                }
            }

            public static List<Vertex> getShortestPathTo(Vertex target) {
                List<Vertex> path = new ArrayList<Vertex>();
                for (Vertex vertex = target; vertex != null; vertex = vertex.previous)
                    path.add(vertex);
                Collections.reverse(path);
                return path;
            }

            public static void main(String[] args) {
                Vertex v0 = new Vertex("Nottinghill_Gate");
                Vertex v1 = new Vertex("High_Street_kensignton");
                Vertex v2 = new Vertex("Glouchester_Road");
                Vertex v3 = new Vertex("South_Kensignton");
                Vertex v4 = new Vertex("Sloane_Square");
                Vertex v5 = new Vertex("Victoria");
                Vertex v6 = new Vertex("Westminster");
                v0.adjacencies = new Edge[]{new Edge(v1, 79.83), new Edge(v6, 97.24)};
                v1.adjacencies = new Edge[]{new Edge(v2, 39.42), new Edge(v0, 79.83)};
                v2.adjacencies = new Edge[]{new Edge(v3, 38.65), new Edge(v1, 39.42)};
                v3.adjacencies = new Edge[]{new Edge(v4, 102.53), new Edge(v2, 38.65)};
                v4.adjacencies = new Edge[]{new Edge(v5, 133.04), new Edge(v3, 102.53)};
                v5.adjacencies = new Edge[]{new Edge(v6, 81.77), new Edge(v4, 133.04)};
                v6.adjacencies = new Edge[]{new Edge(v0, 97.24), new Edge(v5, 81.77)};
                Vertex[] vertices = { v0, v1, v2, v3, v4, v5, v6 };
                computePaths(v0);
                for (Vertex v : vertices) {
                    System.out.println("Distance to " + v + ": " + v.minDistance);
                    List<Vertex> path = getShortestPathTo(v);
                    System.out.println("Path: " + path);
                }
            }
        }

    and the code for scanning the file is:

        import java.util.Scanner;
        import java.io.File;
        import java.io.FileNotFoundException;

        public class DataScanner1 {
            //private int total = 0;
            //private int distance = 0;
            private String vector;
            private String stations;
            /* public int getTotal(){ return total; } */
            /* public void getMenuInput(){ KeyboardInput in = new KeyboardInput;
               System.out.println("Enter the destination? ");
               String val = in.readString(); return val; } */

            public void readFile(String fileName) {
                try {
                    Scanner scanner = new Scanner(new File(fileName));
                    scanner.useDelimiter(System.getProperty("line.separator"));
                    while (scanner.hasNext()) {
                        parseLine(scanner.next());
                    }
                    scanner.close();
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                }
            }

            public void parseLine(String line) {
                Scanner lineScanner = new Scanner(line);
                lineScanner.useDelimiter("\\s*,\\s*");
                vector = lineScanner.next();
                stations = lineScanner.next();
                System.out.println("The current station is " + vector
                    + " and the destination to the next station is " + stations + ".");
                //total += distance;
                //System.out.println("The total distance is " + total);
            }

            public static void main(String[] args) {
                /* if (args.length != 1) {
                   System.err.println("usage: java TextScanner2" + "file location");
                   System.exit(0); } */
                DataScanner1 scanner = new DataScanner1();
                scanner.readFile(args[0]);
                //int total =+ distance;
                //System.out.println("The total distance is " + scanner.getTotal());
            }
        }
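    A minimal sketch of a loader for the TGF file above, assuming the Vertex and Edge classes from the Dijkstra listing are on the classpath: read vertex lines until the "#" separator, then edge lines, and finally convert each adjacency list into the Edge[] array the algorithm expects.

        import java.io.File;
        import java.io.FileNotFoundException;
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.Scanner;

        public class TgfLoader {
            public static Map<String, Vertex> load(String fileName) throws FileNotFoundException {
                Map<String, Vertex> vertices = new HashMap<>();
                Map<String, List<Edge>> adjacency = new HashMap<>();
                boolean inEdgeSection = false;
                Scanner scanner = new Scanner(new File(fileName));
                while (scanner.hasNextLine()) {
                    String line = scanner.nextLine().trim();
                    if (line.isEmpty()) continue;
                    if (line.equals("#")) { inEdgeSection = true; continue; }
                    if (!inEdgeSection) {
                        // Vertex line, e.g. "v6 New York": split into id and label only.
                        String[] t = line.split("\\s+", 2);
                        vertices.put(t[0], new Vertex(t[1]));
                    } else {
                        // Edge line, e.g. "v0 v1 79.83": source, target, weight.
                        String[] t = line.split("\\s+");
                        Vertex target = vertices.get(t[1]);
                        adjacency.computeIfAbsent(t[0], k -> new ArrayList<>())
                                 .add(new Edge(target, Double.parseDouble(t[2])));
                    }
                }
                scanner.close();
                // Attach the collected edges to each source vertex.
                for (Map.Entry<String, List<Edge>> e : adjacency.entrySet()) {
                    vertices.get(e.getKey()).adjacencies = e.getValue().toArray(new Edge[0]);
                }
                return vertices;
            }
        }

    With that in place, the hard-coded main could become: load the map, call computePaths(vertices.get("v0")), and print paths as before.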

    Read the article

  • Money vs. Decimal vs. Float Performance issues (SQL data types for Currency value)?

    - by urz shah
    What data type should be selected for a currency value column in SQL Server? I have read somewhere on the web:

    "Working on customer implementations, we found some interesting performance numbers concerning the money data type. For example, when Analysis Services was set to the currency data type (from double) to match the SQL Server money data type, there was a 13% improvement in processing speed (rows/sec)."

    Is it true?
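    The performance claim aside, the precision trade-off that usually drives this choice is easy to demonstrate. A quick Java sketch of binary floating point vs. an exact decimal type (the same distinction as SQL Server's float vs. decimal/money):

        import java.math.BigDecimal;

        public class CurrencyPrecision {
            public static void main(String[] args) {
                // Binary floating point cannot represent 0.10 exactly, so error accumulates.
                double d = 0.0;
                for (int i = 0; i < 10; i++) d += 0.1;
                System.out.println(d);                       // 0.9999999999999999

                // An exact decimal type accumulates without drift.
                BigDecimal b = BigDecimal.ZERO;
                for (int i = 0; i < 10; i++) b = b.add(new BigDecimal("0.1"));
                System.out.println(b);                       // 1.0
            }
        }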

    Read the article

  • Using R to Analyze G1GC Log Files

    - by user12620111
N)\\b",e:hljs.IMMEDIATE_RE,r:1},{cN:"identifier",b:"[a-zA-Z.][a-zA-Z0-9._]*\\b",e:hljs.IMMEDIATE_RE,r:0},{cN:"operator",b:"|=||   Using R to Analyze G1GC Log Files   Using R to Analyze G1GC Log Files Introduction Working in Oracle Platform Integration gives an engineer opportunities to work on a wide array of technologies. My team’s goal is to make Oracle applications run best on the Solaris/SPARC platform. When looking for bottlenecks in a modern applications, one needs to be aware of not only how the CPUs and operating system are executing, but also network, storage, and in some cases, the Java Virtual Machine. I was recently presented with about 1.5 GB of Java Garbage First Garbage Collector log file data. If you’re not familiar with the subject, you might want to review Garbage First Garbage Collector Tuning by Monica Beckwith. The customer had been running Java HotSpot 1.6.0_31 to host a web application server. I was told that the Solaris/SPARC server was running a Java process launched using a commmand line that included the following flags: -d64 -Xms9g -Xmx9g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=80 -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps -XX:+PrintFlagsFinal -XX:+DisableExplicitGC -XX:+UnlockExperimentalVMOptions -XX:ParallelGCThreads=8 Several sources on the internet indicate that if I were to print out the 1.5 GB of log files, it would require enough paper to fill the bed of a pick up truck. Of course, it would be fruitless to try to scan the log files by hand. Tools will be required to summarize the contents of the log files. Others have encountered large Java garbage collection log files. There are existing tools to analyze the log files: IBM’s GC toolkit The chewiebug GCViewer gchisto HPjmeter Instead of using one of the other tools listed, I decide to parse the log files with standard Unix tools, and analyze the data with R. Data Cleansing The log files arrived in two different formats. I guess that the difference is that one set of log files was generated using a more verbose option, maybe -XX:+PrintHeapAtGC, and the other set of log files was generated without that option. Format 1 In some of the log files, the log files with the less verbose format, a single trace, i.e. the report of a singe garbage collection event, looks like this: {Heap before GC invocations=12280 (full 61): garbage-first heap total 9437184K, used 7499918K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000) region size 4096K, 1 young (4096K), 0 survivors (0K) compacting perm gen total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000) the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000) No shared spaces configured. 2014-05-14T07:24:00.988-0700: 60586.353: [GC pause (young) 7324M->7320M(9216M), 0.1567265 secs] Heap after GC invocations=12281 (full 61): garbage-first heap total 9437184K, used 7496533K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000) region size 4096K, 0 young (0K), 0 survivors (0K) compacting perm gen total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000) the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000) No shared spaces configured. 
        }

    A simple grep can be used to extract a summary:

        $ grep "\[GC pause (young" g1gc.log
        2014-05-13T13:24:35.091-0700: 3.109: [GC pause (young) 20M->5029K(9216M), 0.0146328 secs]
        2014-05-13T13:24:35.440-0700: 3.459: [GC pause (young) 9125K->6077K(9216M), 0.0086723 secs]
        2014-05-13T13:24:37.581-0700: 5.599: [GC pause (young) 25M->8470K(9216M), 0.0203820 secs]
        2014-05-13T13:24:42.686-0700: 10.704: [GC pause (young) 44M->15M(9216M), 0.0288848 secs]
        2014-05-13T13:24:48.941-0700: 16.958: [GC pause (young) 51M->20M(9216M), 0.0491244 secs]
        2014-05-13T13:24:56.049-0700: 24.066: [GC pause (young) 92M->26M(9216M), 0.0525368 secs]
        2014-05-13T13:25:34.368-0700: 62.383: [GC pause (young) 602M->68M(9216M), 0.1721173 secs]

    But that format wasn't easily read into R, so I needed to be a bit more tricky. I used the following Unix command to create a summary file that was easy for R to read.

        $ echo "SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime"
        $ grep "\[GC pause (young" g1gc.log | grep -v mark | sed -e 's/[A-SU-z\(\),]/ /g' -e 's/->/ /' -e 's/: / /g' | more
        SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime
        2014-05-13T13:24:35.091-0700 3.109 20 5029 9216 0.0146328
        2014-05-13T13:24:35.440-0700 3.459 9125 6077 9216 0.0086723
        2014-05-13T13:24:37.581-0700 5.599 25 8470 9216 0.0203820
        2014-05-13T13:24:42.686-0700 10.704 44 15 9216 0.0288848
        2014-05-13T13:24:48.941-0700 16.958 51 20 9216 0.0491244
        2014-05-13T13:24:56.049-0700 24.066 92 26 9216 0.0525368
        2014-05-13T13:25:34.368-0700 62.383 602 68 9216 0.1721173

    Format 2

    In some of the log files, the log files with the more verbose format, a single trace, i.e. the report of a single garbage collection event, was more complicated than Format 1. Here is a text file with an example of a single G1GC trace in the second format. As you can see, it is quite complicated. It is nice that there is so much information available, but the level of detail can be overwhelming. I wrote this awk script (download) to summarize each trace on a single line.

        #!/usr/bin/env awk -f
        BEGIN {
            printf("SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize\n")
        }
        ######################
        # Save count data from lines that are at the start of each G1GC trace.
        # Each trace starts out like this:
        # {Heap before GC invocations=14 (full 0):
        #  garbage-first heap   total 9437184K, used 325496K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000)
        ######################
        /{Heap.*full/{
            gsub ( "\\)" , "" );
            nf=split($0,a,"=");
            split(a[2],b," ");
            getline;
            if ( match($0, "first") ) {
                G1GC=1;
                IncrementalCount=b[1];
                FullCount=substr( b[3], 1, length(b[3])-1 );
            } else {
                G1GC=0;
            }
        }
        ######################
        # Pull out time stamps that are in lines with this format:
        # 2014-05-12T14:02:06.025-0700: 94.312: [GC pause (young), 0.08870154 secs]
        ######################
        /GC pause/ {
            DateTime=$1;
            SecondsSinceLaunch=substr($2, 1, length($2)-1);
        }
        ######################
        # Heap sizes are in lines that look like this:
        # [ 4842M->4838M(9216M)]
        ######################
        /\[ .*]$/ {
            gsub ( "\\[" , "" );
            gsub ( "\ \]" , "" );
            gsub ( "->" , " " );
            gsub ( "\\( " , " " );
            gsub ( "\ \)" , " " );
            split($0,a," ");
            if ( split(a[1],b,"M") > 1 ) {BeforeSize=b[1]*1024;}
            if ( split(a[1],b,"K") > 1 ) {BeforeSize=b[1];}
            if ( split(a[2],b,"M") > 1 ) {AfterSize=b[1]*1024;}
            if ( split(a[2],b,"K") > 1 ) {AfterSize=b[1];}
            if ( split(a[3],b,"M") > 1 ) {TotalSize=b[1]*1024;}
            if ( split(a[3],b,"K") > 1 ) {TotalSize=b[1];}
        }
        ######################
        # Emit an output line when you find input that looks like this:
        # [Times: user=1.41 sys=0.08, real=0.24 secs]
        ######################
        /\[Times/ {
            if (G1GC==1) {
                gsub ( "," , "" );
                split($2,a,"=");
                UserTime=a[2];
                split($3,a,"=");
                SysTime=a[2];
                split($4,a,"=");
                RealTime=a[2];
                print DateTime,SecondsSinceLaunch,IncrementalCount,FullCount,UserTime,SysTime,RealTime,BeforeSize,AfterSize,TotalSize;
                G1GC=0;
            }
        }

    The resulting summary is about 25X smaller than the original file, but still difficult for a human to digest.

        SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
        ...
        2014-05-12T18:36:34.669-0700: 3985.744 561 0 0.57 0.06 0.16 1724416 1720320 9437184
        2014-05-12T18:36:34.839-0700: 3985.914 562 0 0.51 0.06 0.19 1724416 1720320 9437184
        2014-05-12T18:36:35.069-0700: 3986.144 563 0 0.60 0.04 0.27 1724416 1721344 9437184
        2014-05-12T18:36:35.354-0700: 3986.429 564 0 0.33 0.04 0.09 1725440 1722368 9437184
        2014-05-12T18:36:35.545-0700: 3986.620 565 0 0.58 0.04 0.17 1726464 1722368 9437184
        2014-05-12T18:36:35.726-0700: 3986.801 566 0 0.43 0.05 0.12 1726464 1722368 9437184
        2014-05-12T18:36:35.856-0700: 3986.930 567 0 0.30 0.04 0.07 1726464 1723392 9437184
        2014-05-12T18:36:35.947-0700: 3987.023 568 0 0.61 0.04 0.26 1727488 1723392 9437184
        2014-05-12T18:36:36.228-0700: 3987.302 569 0 0.46 0.04 0.16 1731584 1724416 9437184

    Reading the Data into R

    Once the GC log data had been cleansed, either by processing the first format with the shell script, or by processing the second format with the awk script, it was easy to read the data into R.

        g1gc.df = read.csv("summary.txt", row.names = NULL, stringsAsFactors=FALSE, sep="")
        str(g1gc.df)
        ## 'data.frame': 8307 obs. of 10 variables:
        ##  $ row.names         : chr "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:" "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:" ...
        ##  $ SecondsSinceLaunch: num 1.16 1.47 1.97 3.83 6.1 ...
        ##  $ IncrementalCount  : int 0 1 2 3 4 5 6 7 8 9 ...
        ##  $ FullCount         : int 0 0 0 0 0 0 0 0 0 0 ...
        ##  $ UserTime          : num 0.11 0.05 0.04 0.21 0.08 0.26 0.31 0.33 0.34 0.56 ...
        ##  $ SysTime           : num 0.04 0.01 0.01 0.05 0.01 0.06 0.07 0.06 0.07 0.09 ...
        ##  $ RealTime          : num 0.02 0.02 0.01 0.04 0.02 0.04 0.05 0.04 0.04 0.06 ...
        ##  $ BeforeSize        : int 8192 5496 5768 22528 24576 43008 34816 53248 55296 93184 ...
        ##  $ AfterSize         : int 1400 1672 2557 4907 7072 14336 16384 18432 19456 21504 ...
        ##  $ TotalSize         : int 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 ...
        head(g1gc.df)
        ##                       row.names SecondsSinceLaunch IncrementalCount
        ## 1 2014-05-12T14:00:32.868-0700:              1.161                0
        ## 2 2014-05-12T14:00:33.179-0700:              1.472                1
        ## 3 2014-05-12T14:00:33.677-0700:              1.969                2
        ## 4 2014-05-12T14:00:35.538-0700:              3.830                3
        ## 5 2014-05-12T14:00:37.811-0700:              6.103                4
        ## 6 2014-05-12T14:00:41.428-0700:              9.720                5
        ##   FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
        ## 1         0     0.11    0.04     0.02       8192      1400   9437184
        ## 2         0     0.05    0.01     0.02       5496      1672   9437184
        ## 3         0     0.04    0.01     0.01       5768      2557   9437184
        ## 4         0     0.21    0.05     0.04      22528      4907   9437184
        ## 5         0     0.08    0.01     0.02      24576      7072   9437184
        ## 6         0     0.26    0.06     0.04      43008     14336   9437184

    Basic Statistics

    Once the data has been read into R, simple statistics are very easy to generate. All of the numbers from high school statistics are available via simple commands. For example, generate a summary of every column:

        summary(g1gc.df)
        ##   row.names        SecondsSinceLaunch IncrementalCount   FullCount
        ##  Length:8307       Min.   :    1      Min.   :   0     Min.   :  0.0
        ##  Class :character  1st Qu.: 9977      1st Qu.:2048     1st Qu.:  0.0
        ##  Mode  :character  Median :12855      Median :4136     Median : 12.0
        ##                    Mean   :12527      Mean   :4156     Mean   : 31.6
        ##                    3rd Qu.:15758      3rd Qu.:6262     3rd Qu.: 61.0
        ##                    Max.   :55484      Max.   :8391     Max.   :113.0
        ##     UserTime        SysTime          RealTime        BeforeSize
        ##  Min.   :0.040   Min.   :0.0000   Min.   :  0.0   Min.   :   5476
        ##  1st Qu.:0.470   1st Qu.:0.0300   1st Qu.:  0.1   1st Qu.:5137920
        ##  Median :0.620   Median :0.0300   Median :  0.1   Median :6574080
        ##  Mean   :0.751   Mean   :0.0355   Mean   :  0.3   Mean   :5841855
        ##  3rd Qu.:0.920   3rd Qu.:0.0400   3rd Qu.:  0.2   3rd Qu.:7084032
        ##  Max.   :3.370   Max.   :1.5600   Max.   :488.1   Max.   :8696832
        ##    AfterSize         TotalSize
        ##  Min.   :   1380   Min.   :9437184
        ##  1st Qu.:5002752   1st Qu.:9437184
        ##  Median :6559744   Median :9437184
        ##  Mean   :5785454   Mean   :9437184
        ##  3rd Qu.:7054336   3rd Qu.:9437184
        ##  Max.   :8482816   Max.   :9437184

    Q: What is the total amount of User CPU time spent in garbage collection?

        sum(g1gc.df$UserTime)
        ## [1] 6236

    As you can see, less than two hours of CPU time was spent in garbage collection. Is that too much? To find the percentage of time spent in garbage collection, divide the number above by total_elapsed_time*CPU_count. In this case, there are a lot of CPUs and it turns out that the overall amount of CPU time spent in garbage collection isn't a problem when viewed in isolation. When calculating rates, i.e. events per unit time, you need to ask yourself if the rate is homogenous across the time period in the log file. Does the log file include spikes of high activity that should be separately analyzed? Averaging in data from nights and weekends with data from business hours may alias problems. If you have a reason to suspect that the garbage collection rates include peaks and valleys that need independent analysis, see the "Time Series" section, below.

    Q: How much garbage is collected on each pass?

    The amount of heap space that is recovered per GC pass is surprisingly low:

    - At least one collection didn't recover any data. ("Min.=0")
    - 25% of the passes recovered 3MB or less. ("1st Qu.=3072")
    - Half of the GC passes recovered 4MB or less. ("Median=4096")
    - The average amount recovered was 56MB. ("Mean=56390")
    - 75% of the passes recovered 36MB or less. ("3rd Qu.=36860")
    - At least one pass recovered 2GB. ("Max.=2121000")
When calculating rates, i.e. events per unit time, you need to ask yourself if the rate is homogeneous across the time period in the log file. Does the log file include spikes of high activity that should be separately analyzed? Averaging in data from nights and weekends with data from business hours may alias problems. If you have a reason to suspect that the garbage collection rates include peaks and valleys that need independent analysis, see the “Time Series” section, below.

Q: How much garbage is collected on each pass?

The amount of heap space that is recovered per GC pass is surprisingly low:

- At least one collection didn't recover any data. (“Min.=0”)
- 25% of the passes recovered 3MB or less. (“1st Qu.=3072”)
- Half of the GC passes recovered 4MB or less. (“Median=4096”)
- The average amount recovered was 56MB. (“Mean=56390”)
- 75% of the passes recovered 36MB or less. (“3rd Qu.=36860”)
- At least one pass recovered 2GB. (“Max.=2121000”)

g1gc.df$Delta = g1gc.df$BeforeSize - g1gc.df$AfterSize
summary(g1gc.df$Delta)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0 3070 4100 56400 36900 2120000

Q: What is the maximum User CPU time for a single collection?

The worst garbage collection (“Max.”) is many standard deviations away from the mean. The data appears to be right skewed.

summary(g1gc.df$UserTime)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.040 0.470 0.620 0.751 0.920 3.370
sd(g1gc.df$UserTime)
## [1] 0.3966

Basic Graphics

Once the data is in R, it is trivial to plot the data with formats including dot plots, line charts, bar charts (simple, stacked, grouped), pie charts, boxplots, scatter plots, histograms, and kernel density plots.

Histogram of User CPU Time per Collection

I don't think that this graph requires any explanation.

hist(g1gc.df$UserTime,
     main="User CPU Time per Collection",
     xlab="Seconds", ylab="Frequency")

Box plot to identify outliers

When the initial data is viewed with a box plot, you can see the one crazy outlier in the real time per GC. Save this data point for future analysis and drop the outlier so that it's not throwing off our statistics. Now the box plot shows many outliers, which will be examined later, using time series analysis. Notice that the scale of the x-axis changes drastically once the crazy outlier is removed.

par(mfrow=c(2,1))
boxplot(g1gc.df$UserTime,g1gc.df$SysTime,g1gc.df$RealTime,
        main="Box Plot of Time per GC\n(dominated by a crazy outlier)",
        names=c("usr","sys","elapsed"),
        xlab="Seconds per GC", ylab="Time (Seconds)",
        horizontal = TRUE, outcol="red")
crazy.outlier.df=g1gc.df[g1gc.df$RealTime > 400,]
g1gc.df=g1gc.df[g1gc.df$RealTime < 400,]
boxplot(g1gc.df$UserTime,g1gc.df$SysTime,g1gc.df$RealTime,
        main="Box Plot of Time per GC\n(crazy outlier excluded)",
        names=c("usr","sys","elapsed"),
        xlab="Seconds per GC", ylab="Time (Seconds)",
        horizontal = TRUE, outcol="red")
box(which = "outer", lty = "solid")

Here is the crazy outlier for future analysis:

crazy.outlier.df
## row.names SecondsSinceLaunch IncrementalCount
## 8233 2014-05-12T23:15:43.903-0700: 20741 8316
## FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
## 8233 112 0.55 0.42 488.1 8381440 8235008 9437184
## Delta
## 8233 146432

R Time Series Data

To analyze the garbage collection as a time series, I'll use Z's Ordered Observations (zoo).
“zoo is the creator for an S3 class of indexed totally ordered observations which includes irregular time series.”

require(zoo)
## Loading required package: zoo
##
## Attaching package: 'zoo'
##
## The following objects are masked from 'package:base':
##
## as.Date, as.Date.numeric

head(g1gc.df[,1])
## [1] "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:"
## [3] "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:"
## [5] "2014-05-12T14:00:37.811-0700:" "2014-05-12T14:00:41.428-0700:"

options("digits.secs"=3)
times=as.POSIXct( g1gc.df[,1], format="%Y-%m-%dT%H:%M:%OS%z:")
g1gc.z = zoo(g1gc.df[,-c(1)], order.by=times)
head(g1gc.z)
## SecondsSinceLaunch IncrementalCount FullCount
## 2014-05-12 17:00:32.868 1.161 0 0
## 2014-05-12 17:00:33.178 1.472 1 0
## 2014-05-12 17:00:33.677 1.969 2 0
## 2014-05-12 17:00:35.538 3.830 3 0
## 2014-05-12 17:00:37.811 6.103 4 0
## 2014-05-12 17:00:41.427 9.720 5 0
## UserTime SysTime RealTime BeforeSize AfterSize
## 2014-05-12 17:00:32.868 0.11 0.04 0.02 8192 1400
## 2014-05-12 17:00:33.178 0.05 0.01 0.02 5496 1672
## 2014-05-12 17:00:33.677 0.04 0.01 0.01 5768 2557
## 2014-05-12 17:00:35.538 0.21 0.05 0.04 22528 4907
## 2014-05-12 17:00:37.811 0.08 0.01 0.02 24576 7072
## 2014-05-12 17:00:41.427 0.26 0.06 0.04 43008 14336
## TotalSize Delta
## 2014-05-12 17:00:32.868 9437184 6792
## 2014-05-12 17:00:33.178 9437184 3824
## 2014-05-12 17:00:33.677 9437184 3211
## 2014-05-12 17:00:35.538 9437184 17621
## 2014-05-12 17:00:37.811 9437184 17504
## 2014-05-12 17:00:41.427 9437184 28672

Example of Two Benchmark Runs in One Log File

The data in the following graph is from a different log file, not the one of primary interest to this article. I'm including this image because it is an example of idle periods followed by busy periods. It would be uninteresting to average the rate of garbage collection over the entire log file period. More interesting would be the rate of garbage collection in the two busy periods. Are they the same or different? Your production data may be similar, for example, bursts when employees return from lunch and idle times on weekend evenings, etc. Once the data is in an R Time Series, you can analyze isolated time windows.

Clipping the Time Series data

Flashing back to our test case… Viewing the data as a time series is interesting. You can see that the work-intensive time period is between 9:00 PM and 3:00 AM. Let's clip the data to the interesting period:

par(mfrow=c(2,1))
plot(g1gc.z$UserTime, type="h",
     main="User Time per GC\nTime: Complete Log File",
     xlab="Time of Day", ylab="CPU Seconds per GC",
     col="#1b9e77")
clipped.g1gc.z=window(g1gc.z,
                      start=as.POSIXct("2014-05-12 21:00:00"),
                      end=as.POSIXct("2014-05-13 03:00:00"))
plot(clipped.g1gc.z$UserTime, type="h",
     main="User Time per GC\nTime: Limited to Benchmark Execution",
     xlab="Time of Day", ylab="CPU Seconds per GC",
     col="#1b9e77")
box(which = "outer", lty = "solid")
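The same clipping operation can be expressed outside R as well; for example, a minimal pandas sketch (not part of the original analysis, with made-up rows standing in for the parsed log) uses label-based slicing on a DatetimeIndex:

import pandas as pd

# Made-up rows standing in for the parsed GC summary; the real frame would be
# built from the cleansed log with a DatetimeIndex on the GC timestamps.
df = pd.DataFrame(
    {"UserTime": [0.11, 0.57, 0.51]},
    index=pd.to_datetime([
        "2014-05-12 14:00:32.868",
        "2014-05-12 21:36:34.669",
        "2014-05-13 02:36:34.839",
    ]),
)

# Label-based slicing on a sorted DatetimeIndex keeps only the busy period,
# analogous to zoo's window() above.
clipped = df.loc["2014-05-12 21:00:00":"2014-05-13 03:00:00"]
print(round(clipped["UserTime"].sum(), 2))   # 1.08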
Cumulative Incremental and Full GC count

Here is the cumulative incremental and full GC count. When the line is very steep, it indicates that the GCs are repeating very quickly. Notice that the scale on the Y axis is different for full vs. incremental.

plot(clipped.g1gc.z[,c(2:3)],
     main="Cumulative Incremental and Full GC count",
     xlab="Time of Day",
     col="#1b9e77")

GC Analysis of Benchmark Execution using Time Series data

In the following series of 3 graphs:

- The "After Size" graph shows the amount of heap space in use after each garbage collection. Many Java objects are still referenced, i.e. alive, during each garbage collection. This may indicate that the application has a memory leak, or may indicate that the application has a very large memory footprint. Typically, an application's memory footprint plateaus in the early stage of execution. One would expect this graph to have a flat top. The steep decline in the heap space may indicate that the application crashed after 2:00.
- The second graph shows that the outliers in real execution time, discussed above, occur near 2:00, when the Java heap seems to be quite full.
- The third graph shows that Full GCs are infrequent during the first few hours of execution. The rate of Full GCs (the slope of the cumulative Full GC line) changes near midnight.

plot(clipped.g1gc.z[,c("AfterSize","RealTime","FullCount")],
     xlab="Time of Day",
     col=c("#1b9e77","red","#1b9e77"))

GC Analysis of heap recovered

Each GC trace includes the amount of heap space in use before and after the individual GC event. During garbage collection, unreferenced objects are identified, the space holding the unreferenced objects is freed, and thus the difference in before and after usage indicates how much space has been freed. The following box plot and bar chart both demonstrate the same point: the amount of heap space freed per garbage collection is surprisingly low.

par(mfrow=c(2,1))
boxplot(as.vector(clipped.g1gc.z$Delta),
        main="Amount of Heap Recovered per GC Pass",
        xlab="Size in KB",
        horizontal = TRUE,
        col="red")
hist(as.vector(clipped.g1gc.z$Delta),
     main="Amount of Heap Recovered per GC Pass",
     xlab="Size in KB",
     breaks=100,
     col="red")
box(which = "outer", lty = "solid")

This graph is the most interesting. The dark blue area shows how much heap is occupied by referenced Java objects. This represents memory that holds live data. The red fringe at the top shows how much data was recovered after each garbage collection.

barplot(clipped.g1gc.z[,c("AfterSize","Delta")],
        col=c("#7570b3","#e7298a"),
        xlab="Time of Day",
        border=NA)
legend("topleft",
       c("Live Objects","Heap Recovered on GC"),
       fill=c("#7570b3","#e7298a"))
box(which = "outer", lty = "solid")

When I discuss the data in the log files with the customer, I will ask for an explanation for the large amount of referenced data resident in the Java heap. There are two possibilities:

- There is a memory leak and the amount of space required to hold referenced objects will continue to grow, limited only by the maximum heap size. After the maximum heap size is reached, the JVM will throw an “Out of Memory” exception every time that the application tries to allocate a new object. If this is the case, the application needs to be debugged to identify why old objects are referenced when they are no longer needed.
- The application has a legitimate requirement to keep a large amount of data in memory. The customer may want to further increase the maximum heap size. Another possible solution would be to partition the application across multiple cluster nodes, where each node has responsibility for managing a unique subset of the data.

Conclusion

In conclusion, R is a very powerful tool for the analysis of Java garbage collection log files. The primary difficulty is data cleansing so that information can be read into an R data frame. Once the data has been read into R, a rich set of tools may be used for thorough evaluation.

    Read the article

  • How to make facebox popup remain open and the content inside the facebox changes after the submit

    - by Leonardo Dario Perna
    Hi, I'm a total jQuery n00b. In my Rails app this is what happens: I'm on the homepage and I click this link:

    <a href='/betas/new' rel='facebox'>Sign up</a>

    A beautiful facebox popup shows up and renders this view and the form it contains:

    # /app/views/invites/new
    <% form_tag({ :controller => 'registration_code', :action => 'create' }, :id => 'codeForm') do %>
      <%= text_field_tag :code %> <br />
      <%= submit_tag 'Confirm' %>
    <% end %>

    I click on submit, and if the code is valid the user is taken to another page in another controller:

    def create
      # some stuff
      redirect_to :controller => 'users', :action => 'type'
    end

    Now I would like to render that page INSIDE the SAME popup that contains the form, after the submit button is pressed, but I have NO IDEA how to do it. I've tried FaceboxRender but this happens. Original version:

    # /controllers/users_controller
    def type
    end

    If I change it like this, nothing happens:

    # /controllers/users_controller
    def type
      respond_to do |format|
        format.html
        format.js { render_to_facebox }
      end
    end

    If I change it like this (I know it's wrong, but I'm a n00b so it's ok :-):

    # /controllers/users_controller
    def type
      respond_to do |format|
        format.html { render_to_facebox }
        format.js
      end
    end

    I got this rendered:

    try { jQuery.facebox("my raw HTML from users/type.html.erb substituted here")'); throw e }

    Any solutions? THANK YOU SO MUCH!!

    Read the article

  • Manipulating columns of numbers in elisp

    - by ~unutbu
    I have text files with tables like this:

    Investment advisory and related fees receivable (161,570 ) (71,739 ) (73,135 )
    Net purchases of trading investments (93,261 ) (30,701 ) (11,018 )
    Other receivables 61,216 (10,352 ) (69,313 )
    Restricted cash 20,658 (20,658 ) -
    Other current assets (39,643 ) 14,752 64
    Other non-current assets 71,896 (26,639 ) (26,330 )

    Since these are accounting numbers, parenthesized numbers indicate negative numbers. Dashes represent 0 or no number. I'd like to be able to mark a rectangular region such as the third column above, call a function (format-column), and automatically have

    (-73135-11018-69313+64-26330)/1000

    sitting in my kill-ring. Even better would be

    -73.135-11.018-69.313+0.064-26.330

    but I couldn't figure out a way to transform 64 into 0.064. This is what I've come up with:

    (defun format-column ()
      "format accounting numbers in a rectangular column.
    format-column puts the result in the kill-ring"
      (interactive)
      (let ((p (point))
            (m (mark)))
        (copy-rectangle-to-register 0 (min m p) (max m p) nil)
        (with-temp-buffer
          (insert-register 0)
          (goto-char (point-min))
          (while (search-forward "-" nil t) (replace-match "" nil t))
          (goto-char (point-min))
          (while (search-forward "," nil t) (replace-match "" nil t))
          (goto-char (point-min))
          (while (search-forward ")" nil t) (replace-match "" nil t))
          (goto-char (point-min))
          (while (search-forward "(" nil t)
            (replace-match "-" nil t)
            (just-one-space)
            (delete-backward-char 1))
          (goto-char (point-min))
          (while (search-forward "\n" nil t) (replace-match " " nil t))
          (goto-char (point-min))
          (kill-new (mapconcat 'identity
                               (split-string (buffer-substring (point-min) (point-max)))
                               "+"))
          (kill-region (point-min) (point-max))
          (insert "(")
          (yank 2)
          (goto-char (point-min))
          (while (search-forward "+-" nil t) (replace-match "-" nil t))
          (goto-char (point-max))
          (insert ")/1000")
          (kill-region (point-min) (point-max)))))

    (global-set-key "\C-c\C-f" 'format-column)

    Although it seems to work, I'm sure this function is poorly coded. The repetitive calls to goto-char, search-forward, and replace-match and the switching from buffer to string and back to buffer seem ugly and inelegant. My entire approach may be wrong-headed, but I don't know enough elisp to make this more beautiful. Do you see a better way to write format-column, and/or could you make suggestions on how to improve this code?
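    The arithmetic being asked for is language-independent; as a minimal sketch (in Python rather than elisp, purely to illustrate the parenthesis-as-negative and divide-by-1000 rules, with the sample third column hard-coded as test data):

    def parse_accounting(token):
        # '(73,135 )' -> -73.135, '64' -> 0.064, '-' -> 0.0
        token = token.strip()
        if token == "-":
            return 0.0
        negative = token.startswith("(")
        digits = token.strip("() ").replace(",", "")
        value = int(digits) / 1000.0
        return -value if negative else value

    column = ["(73,135 )", "(11,018 )", "(69,313 )", "-", "64", "(26,330 )"]
    values = [parse_accounting(t) for t in column]
    print(values)                 # [-73.135, -11.018, -69.313, 0.0, 0.064, -26.33]
    print(round(sum(values), 3))  # -179.732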

    Read the article

  • How can I stop rails validating xml?

    - by Andrei T. Ursan
    I'm submitting the following message to a Rails web service:

    xmlPostData = "<message> <message-text>" + MESSAGE_WITH_XML + "</message-text> <name>" + subject + "</name> <f1>" + toPhone + "</f1> <f2>" + fromPhone + "</f2> </message>";

    The problem is that one of the fields contains text which is itself XML. It's a workaround, but I need to be able to submit that XML to the db and get it back from there. Can I stop Rails parsing my XML and storing it as a serialized hash? This is how it ends up:

    --- !map:HashWithIndifferentAccess
    smil: !map:HashWithIndifferentAccess
      head: !map:HashWithIndifferentAccess
        layout: !map:HashWithIndifferentAccess
          root_layout: !map:HashWithIndifferentAccess
            height: "600"
            background_color: white
            width: "800"
          type: text/smil-basic-layout
      body: !map:HashWithIndifferentAccess
        par: !map:HashWithIndifferentAccess
          text: !map:HashWithIndifferentAccess
            left: "33"
            begin: "33"
            dur: "33"
            val: 34343434343434343aaaaaaa
            height: "33"
            width: "33"
            top: "33"

    And this is the Ruby method from the Rails web service:

    # POST /messages
    # POST /messages.xml
    def create
      @message = Message.new(params[:message])
      respond_to do |format|
        if @message.save
          flash[:notice] = 'Message was successfully created.'
          format.html { redirect_to(@message) }
          format.xml { render :xml => @message, :status => :created, :location => @message }
        else
          format.html { render :action => "new" }
          format.xml { render :xml => @message.errors, :status => :unprocessable_entity }
        end
      end
    end

    It's a workaround, but for the moment it has to work ...
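    A common approach to embedding XML inside XML is to escape the inner document (or wrap it in CDATA) so the receiving parser treats it as character data rather than markup. Here is a minimal Python sketch of the escaping idea, for illustration only; the element names mirror the message above and this is not the poster's code:

    from xml.sax.saxutils import escape
    import xml.etree.ElementTree as ET

    inner_xml = "<smil><head><layout/></head></smil>"  # hypothetical inner document

    # Escape &, < and > so the inner document travels as plain text.
    payload = "<message><message-text>%s</message-text></message>" % escape(inner_xml)

    # Round trip: the parser hands the inner markup back as an ordinary string.
    text = ET.fromstring(payload).findtext("message-text")
    assert text == inner_xml
    print(text)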

    Read the article

  • I am having the following warning in gcc compilation on 32-bit architecture but not having any such wa

    - by thetna
    symbol.c: In function 'symbol_FPrint':
    symbol.c:1209: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL'
    symbol.c: In function 'symbol_FPrintOtter':
    symbol.c:1236: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL'
    symbol.c:1239: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL'
    symbol.c:1243: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL'
    symbol.c:1266: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL'

    In symbol.c:

    1198 #ifdef CHECK
    1199   else {
    1200     misc_StartErrorReport();
    1201     misc_ErrorReport("\n In symbol_FPrint: Cannot print symbol.\n");
    1202     misc_FinishErrorReport();
    1203   }
    1204 #endif
    1205 }
    1206 else if (symbol_SignatureExists())
    1207   fputs(symbol_Name(Symbol), File);
    1208 else
    1209   fprintf(File, "%ld", Symbol);
    1210 }

    And SYMBOL is defined as:

    typedef size_t SYMBOL;

    When I replaced '%ld' with '%zu', I got the following warning:

    symbol.c: In function 'symbol_FPrint': symbol.c:1209: warning: ISO C90 does not support the 'z' printf length modifier

    Note: from here on, the question was edited on 26th of March 2010 and the following problem has been added because of its similarity to the above-mentioned problem. I have the following statement:

    printf("\n\t %4d:%4d:%4d:%4d:%4d:%s:%d", Index, S->info, S->weight, Precedence[Index], S->props, S->name, S->length);

    The warning I get while compiling on a 64-bit architecture is:

    format '%4d' expects type 'int', but argument 5 has type 'size_t'

    Here are the definitions of the parameters:

    NAT props;
    typedef unsigned int NAT;

    How can I get rid of these warnings so that I can compile cleanly on both 32-bit and 64-bit architectures? What can the solution be?

    Read the article

  • Could not load file or assembly 'GMap.NET.Core' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    - by Sam M
    I have a WCF service application in VS2010. My local machine is a 32-bit OS, whereas the server is 64-bit. There are around 6 services in my solution. I'm successfully able to host the application on IIS on my local machine, and it works fine. But when I try to host that service application on the server I get the error below:

    Could not load file or assembly 'GMap.NET.Core' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    I do have a reference added in my solution for GMap.NET.Core. I have tried setting the platform target in my solution to Any CPU. Also, in the application pool I have set Enable 32-Bit Applications to True. I have also set Copy Local to TRUE in my solution before publishing. When I build and run the solution locally I don't get any errors and the solution builds successfully. What else can I try to get my services successfully hosted on the server so they can be accessed through my application?

    Read the article

  • C# exception when calling stored procedure: ORA-01460 - unimplemented or unreasonable conversion req

    - by Taylor L
    I'm trying to call a stored procedure using ADO.NET and I'm getting the following error:

    ORA-01460 - unimplemented or unreasonable conversion requested

    The stored procedure I'm trying to call has the following parameters:

    param1 IN VARCHAR2,
    param2 IN NUMBER,
    param3 IN VARCHAR2,
    param4 OUT NUMBER,
    param5 OUT NUMBER,
    param6 OUT NUMBER,
    param7 OUT VARCHAR2

    Below is the C# code I'm using to call the stored procedure:

    OracleCommand command = connection.CreateCommand();
    command.CommandType = CommandType.StoredProcedure;
    command.CommandText = "MY_PROC";

    OracleParameter param1 = new OracleParameter() { ParameterName = "param1", Direction = ParameterDirection.Input, Value = p1, OracleDbType = OracleDbType.Varchar2, Size = p1.Length };
    OracleParameter param2 = new OracleParameter() { ParameterName = "param2", Direction = ParameterDirection.Input, Value = p2, OracleDbType = OracleDbType.Decimal };
    OracleParameter param3 = new OracleParameter() { ParameterName = "param3", Direction = ParameterDirection.Input, Value = p3, OracleDbType = OracleDbType.Varchar2, Size = p3.Length };
    OracleParameter param4 = new OracleParameter() { ParameterName = "param4", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Decimal };
    OracleParameter param5 = new OracleParameter() { ParameterName = "param5", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Decimal };
    OracleParameter param6 = new OracleParameter() { ParameterName = "param6", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Decimal };
    OracleParameter param7 = new OracleParameter() { ParameterName = "param7", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Varchar2, Size = 32767 };

    command.Parameters.Add(param1);
    command.Parameters.Add(param2);
    command.Parameters.Add(param3);
    command.Parameters.Add(param4);
    command.Parameters.Add(param5);
    command.Parameters.Add(param6);
    command.Parameters.Add(param7);

    command.ExecuteNonQuery();

    Any ideas what I'm doing wrong?

    Read the article

  • Error executing child request for handler in plugin

    - by user1348351
    I'm using nopCommerce, which is open source. I wanted to show the recently added products on the home page, so I activated the Nop JCarousel plugin in the admin panel. If I select "Recently view product" as the data source type it works fine, but if I select "recently add product" as the data source type an error comes up. It says:

    Server Error in '/' Application.
    Method not found: 'Nop.Core.IPagedList`1<Nop.Core.Domain.Catalog.Product> Nop.Services.Catalog.IProductService.SearchProducts(Int32, Int32, System.Nullable`1<Boolean>, System.Nullable`1<System.Decimal>, System.Nullable`1<System.Decimal>, Int32, System.String, Boolean, Int32, System.Collections.Generic.IList`1<Int32>, Nop.Core.Domain.Catalog.ProductSortingEnum, Int32, Int32, Boolean, System.Collections.Generic.IList`1<Int32> ByRef, Boolean)'.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.MissingMethodException: Method not found: 'Nop.Core.IPagedList`1<Nop.Core.Domain.Catalog.Product> Nop.Services.Catalog.IProductService.SearchProducts(Int32, Int32, System.Nullable`1<Boolean>, System.Nullable`1<System.Decimal>, System.Nullable`1<System.Decimal>, Int32, System.String, Boolean, Int32, System.Collections.Generic.IList`1<Int32>, Nop.Core.Domain.Catalog.ProductSortingEnum, Int32, Int32, Boolean, System.Collections.Generic.IList`1<Int32> ByRef, Boolean)'.

    Source Error:

    Line 3: @foreach (var widget in Model)
    Line 4: {
    Line 5:     @Html.Action(widget.ActionName, widget.ControllerName, widget.RouteValues)
    Line 6: }

    Any idea on how to solve this?

    Read the article

  • Oracle Data Provider and casting

    - by mrjoltcola
    I use Oracle's specific data provider, not the Microsoft provider that is being discontinued. The thing I've found about ODP.NET is how picky it is with data types. Where JDBC and other ADO providers just convert and make things work, ODP.NET will throw an invalid cast exception unless you get it exactly right. Consider this code:

    String strSQL = "SELECT DOCUMENT_SEQ.NEXTVAL FROM DUAL";
    OracleCommand cmd = new OracleCommand(strSQL, conn);
    reader = cmd.ExecuteReader();
    if (reader != null && reader.Read())
    {
        Int64 id = reader.GetInt64(0);
        return id;
    }

    Due to ODP.NET's pickiness on conversion, this doesn't work. My usual options are:

    1) Retrieve into a Decimal and return it with a cast to an Int64 (I don't like this because Decimal is just overkill, and at least once I remember reading it was deprecated...):

    Decimal id = reader.GetDecimal(0);
    return (Int64)id;

    2) Or cast in the SQL statement to make sure it fits into Int64, like NUMBER(18):

    String strSQL = "SELECT CAST(DOCUMENT_SEQ.NEXTVAL AS NUMBER(18)) FROM DUAL";

    I do (2), because I feel it's just not clean pulling a number into a .NET Decimal when my domain types are Int32 or Int64. Other providers I've used are nice (smart) enough to do the conversion on the fly. Any suggestions from the ODP.NET gurus?

    Read the article

< Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >