Search Results

Search found 60181 results on 2408 pages for 'meta data'.


  • How to customize and reuse a DataGridColumnHeader style?

    - by instcode
    Hi all, I'm trying to customize the column headers of a DataGrid to show sub-column headers as in the following screenshot. I've made a style for two sub-columns in the following XAML:

        <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:data="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data"
            xmlns:primitives="clr-namespace:System.Windows.Controls.Primitives;assembly=System.Windows.Controls.Data"
            xmlns:sl="clr-namespace:UI"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            x:Class="UI.ColumnHeaderGrid" mc:Ignorable="d">
          <UserControl.Resources>
            <Style x:Key="SplitColumnHeaderStyle" TargetType="primitives:DataGridColumnHeader">
              <Setter Property="Foreground" Value="#FF000000"/>
              <Setter Property="HorizontalContentAlignment" Value="Center"/>
              <Setter Property="VerticalContentAlignment" Value="Center"/>
              <Setter Property="IsTabStop" Value="False"/>
              <Setter Property="SeparatorBrush" Value="#FFC9CACA"/>
              <Setter Property="Padding" Value="4"/>
              <Setter Property="Template">
                <Setter.Value>
                  <ControlTemplate TargetType="primitives:DataGridColumnHeader">
                    <Grid x:Name="Root">
                      <Grid.ColumnDefinitions>
                        <ColumnDefinition/>
                        <ColumnDefinition Width="Auto"/>
                      </Grid.ColumnDefinitions>
                      <Rectangle x:Name="BackgroundRectangle" Fill="#FF1F3B53" Stretch="Fill" Grid.ColumnSpan="2"/>
                      <Rectangle x:Name="BackgroundGradient" Stretch="Fill" Grid.ColumnSpan="2">
                        <Rectangle.Fill>
                          <LinearGradientBrush EndPoint=".7,1" StartPoint=".7,0">
                            <GradientStop Color="#FCFFFFFF" Offset="0.015"/>
                            <GradientStop Color="#F7FFFFFF" Offset="0.375"/>
                            <GradientStop Color="#E5FFFFFF" Offset="0.6"/>
                            <GradientStop Color="#D1FFFFFF" Offset="1"/>
                          </LinearGradientBrush>
                        </Rectangle.Fill>
                      </Rectangle>
                      <Grid>
                        <Grid.ColumnDefinitions>
                          <ColumnDefinition/>
                          <ColumnDefinition Width="1"/>
                          <ColumnDefinition/>
                        </Grid.ColumnDefinitions>
                        <Grid.RowDefinitions>
                          <RowDefinition/>
                          <RowDefinition/>
                          <RowDefinition/>
                        </Grid.RowDefinitions>
                        <TextBlock Grid.Row="0" Grid.ColumnSpan="3" Text="Headers" TextAlignment="Center"/>
                        <Rectangle Grid.Row="1" Grid.ColumnSpan="3" Fill="{TemplateBinding SeparatorBrush}" Height="1"/>
                        <TextBlock Grid.Row="2" Grid.Column="0" Text="Header 1" TextAlignment="Center"/>
                        <Rectangle Grid.Row="2" Grid.Column="1" Fill="{TemplateBinding SeparatorBrush}" Width="1"/>
                        <TextBlock Grid.Row="2" Grid.Column="2" Text="Header 2" TextAlignment="Center"/>
                        <Path x:Name="SortIcon" Grid.Column="2" Fill="#FF444444" Stretch="Uniform"
                              HorizontalAlignment="Left" Margin="4,0,0,0" VerticalAlignment="Center" Width="8"
                              Opacity="0" RenderTransformOrigin=".5,.5"
                              Data="F1 M -5.215,6.099L 5.215,6.099L 0,0L -5.215,6.099 Z "/>
                      </Grid>
                      <Rectangle x:Name="VerticalSeparator" Fill="{TemplateBinding SeparatorBrush}"
                                 VerticalAlignment="Stretch" Width="1"
                                 Visibility="{TemplateBinding SeparatorVisibility}" Grid.Column="1"/>
                    </Grid>
                  </ControlTemplate>
                </Setter.Value>
              </Setter>
            </Style>
          </UserControl.Resources>
          <data:DataGrid x:Name="LayoutRoot">
            <data:DataGrid.Columns>
              <data:DataGridTemplateColumn HeaderStyle="{StaticResource SplitColumnHeaderStyle}">
                <data:DataGridTemplateColumn.CellTemplate>
                  <DataTemplate>
                    <Grid>
                      <Grid.ColumnDefinitions>
                        <ColumnDefinition/>
                        <ColumnDefinition/>
                      </Grid.ColumnDefinitions>
                      <Border Grid.Column="0" BorderBrush="#FFC9CACA" BorderThickness="0,0,0,0">
                        <TextBlock Grid.Column="0" Text="{Binding GridData.Column1}"/>
                      </Border>
                      <Border Grid.Column="1" BorderBrush="#FFC9CACA" BorderThickness="1,0,0,0">
                        <TextBlock Grid.Column="0" Text="{Binding GridData.Column2}"/>
                      </Border>
                    </Grid>
                  </DataTemplate>
                </data:DataGridTemplateColumn.CellTemplate>
              </data:DataGridTemplateColumn>
            </data:DataGrid.Columns>
          </data:DataGrid>

    Now I want to reuse and extend this style to support two to six sub-column headers, but I don't know if there is a way to do this, like ContentPresenter "overriding":

        <Style x:Key="SplitColumnHeaderStyle" TargetType="primitives:DataGridColumnHeader">
          <Setter Property="Template">
            <Setter.Value>
              ...
              <ContentPresenter Content="{TemplateBinding Content}" .../>
              ...
            </Setter.Value>
          </Setter>
        </Style>
        <Style x:Key="TwoSubColumnHeaderStyle" BasedOn="{StaticResource SplitColumnHeaderStyle}">
          <Setter Property="Content">
            <Setter.Value>
              <Grid 2x2 .../>
            </Setter.Value>
          </Setter>
        </Style>
        <Style x:Key="ThreeSubColumnHeaderStyle" BasedOn="{StaticResource SplitColumnHeaderStyle}">
          <Setter Property="Content">
            <Setter.Value>
              <Grid 2x3 .../>
            </Setter.Value>
          </Setter>
        </Style>

    Anyway, please help me with these issues: Given the template above, how can I support more sub-column headers without having to create a new template for each? Assuming that issue is solved, how could I attach column names from outside the styles? I see that some parts, properties and visualization rules in the XAML are just copies from the original Silverlight component's style (BackgroundGradient, BackgroundRectangle, VisualStateManager...). They must be there to support the default behaviors and effects, but does anyone know how to remove them while keeping all the default behaviors/effects? Please be specific because I'm just getting started with C# and Silverlight. Thanks.
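    Since DataGridColumnHeader is a ContentControl, one direction worth trying (an untested sketch; whether this Silverlight DataGrid version passes arbitrary header content through this way should be verified first) is to keep a single shared template whose body is just a ContentPresenter, and to supply the per-column sub-header layout through each column's Header property instead of one style per sub-column count:

        <Style x:Key="SplitColumnHeaderStyle" TargetType="primitives:DataGridColumnHeader">
          <Setter Property="Template">
            <Setter.Value>
              <ControlTemplate TargetType="primitives:DataGridColumnHeader">
                <!-- backgrounds, separators and sort icon as in the template above -->
                <ContentPresenter Content="{TemplateBinding Content}"
                                  ContentTemplate="{TemplateBinding ContentTemplate}"/>
              </ControlTemplate>
            </Setter.Value>
          </Setter>
        </Style>

        <data:DataGridTemplateColumn HeaderStyle="{StaticResource SplitColumnHeaderStyle}">
          <data:DataGridTemplateColumn.Header>
            <!-- any Grid here: 2, 3 ... 6 sub-columns, with the real column names -->
          </data:DataGridTemplateColumn.Header>
        </data:DataGridTemplateColumn>

    This would also answer the second question, since the column names then live on each column rather than inside the shared style.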


  • VARCHAR does not work as expected in Apache Derby

    - by Tom Brito
    I'm having this same problem: "How can I truncate a VARCHAR to the table field length automatically in Derby using SQL?" To be specific:

        CREATE TABLE A ( B VARCHAR(2) );
        INSERT INTO A (B) VALUES ('1234');

    would throw a SQLException: "A truncation error was encountered trying to shrink VARCHAR '123' to length 2." That question is already answered: "No. You should chop it off after checking the meta-data. Or if you don't want to check the meta-data every time, then you must keep your code and database in sync. But that's not a big deal; it's usual practice in validators." But my doubt is: isn't VARCHAR supposed to vary its size to fit the data? What's wrong with Apache Derby's VARCHAR?
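    For reference, the "chop it off after checking the meta-data" approach from that answer could look roughly like this in JDBC (a sketch only; the table and column names come from the example above, the connection URL is a placeholder, and in practice you would cache the column size rather than query it per insert):

        import java.sql.*;

        public class TruncatingInsert {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection("jdbc:derby:demo;create=true")) {
                    // Look up the declared length of column B from the metadata.
                    int maxLen;
                    try (ResultSet cols = con.getMetaData().getColumns(null, null, "A", "B")) {
                        cols.next();
                        maxLen = cols.getInt("COLUMN_SIZE");
                    }
                    String value = "1234";
                    // Truncate in code, since Derby will not do it silently.
                    if (value.length() > maxLen) {
                        value = value.substring(0, maxLen);
                    }
                    try (PreparedStatement ps = con.prepareStatement("INSERT INTO A (B) VALUES (?)")) {
                        ps.setString(1, value);
                        ps.executeUpdate();
                    }
                }
            }
        }

    (Derby stores unquoted identifiers in upper case, which is why "A" and "B" are passed upper-cased to getColumns.)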


  • Having trouble with mouse-over tabs

    - by user225269
    I downloaded a webpage template from the internet because I don't know how to design a webpage in Photoshop. This is the one I downloaded: http://www.freewebtemplates.com/download/templates/9839 — and I modified it. I have this code for mouse-over tabs from Dynamic Drive, but it doesn't seem to be working with the template that I downloaded. Here is my current code:

        <script src="mouseovertabs.js" type="text/javascript">
        </script>
        <meta http-equiv="content-type" content="text/html; charset=utf-8" />
        <title>Designed by Web Page Templates</title>
        <meta name="keywords" content="" />
        <meta name="description" content="" />
        <link href="default.css" rel="stylesheet" type="text/css" />
        </head>
        <body>
        <table border="0" align="center" cellpadding="0" cellspacing="0" class="bg1">
          <tr>
            <td class="text1" style="height: 50px;">xd627 information management system</td>
          </tr>
          <tr>
            <div id="mytabsmenu" class="tabsmenuclass">
              <td class="bg5"><table border="0" cellspacing="0" cellpadding="0" style="height: 62px; padding-top: 15px;">
                <tr align="center">
                  <td><ul><li><a href="index.html" class="link1">Homepage</a></li></td>
                  <td><li><a href="RegStuds.php" class="link1">Database</a></li></td>
                  <td><li><a href="#" class="link1">About</a></li></ul></td>
                  <a href="submenucontents.htm" style="visibility:hidden">Sub Menu contents</a>
                  <div id="mysubmenuarea" class="tabsmenucontentclass">
                    <!--1st link within submenu container should point to the external submenu contents file-->
                    <a href="submenucontents.htm" style="visibility:hidden">Sub Menu contents</a>
                  </div>
                  <script type="text/javascript">
                    //mouseovertabsmenu.init("tabs_container_id", "submenu_container_id", "bool_hidecontentsmouseout")
                    mouseovertabsmenu.init("mytabsmenu", "mysubmenuarea", true)
                  </script>
            </div>

    What might be wrong here? It works perfectly with my previous page, which had no layout at all:

        <script src="mouseovertabs.js" type="text/javascript">
        /***********************************************
        * Mouseover Tabs Menu- (c) Dynamic Drive DHTML code library (www.dynamicdrive.com)
        * This notice MUST stay intact for legal use
        * Visit Dynamic Drive at http://www.dynamicdrive.com/ for this script and 100s more
        ***********************************************/
        </script>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Untitled Document</title>
        </head>
        <body>
        <div id="mytabsmenu" class="tabsmenuclass">
          <ul>
            <li><a href="" rel="gotsubmenu[selected]">Database Manipulation</a></li>
            <li><a href="" rel="gotsubmenu">Register User</a></li>
            <li><a href="loginform2.php" rel="gotsubmenu">Logout</a></li>
            <li><a href=""></a></li>
          </ul>
        </div>
        <div id="mysubmenuarea" class="tabsmenucontentclass">
          <!--1st link within submenu container should point to the external submenu contents file-->
          <a href="submenucontents.htm" style="visibility:hidden">Sub Menu contents</a>
        </div>
        <script type="text/javascript">
          //mouseovertabsmenu.init("tabs_container_id", "submenu_container_id", "bool_hidecontentsmouseout")
          mouseovertabsmenu.init("mytabsmenu", "mysubmenuarea", true)
        </script>
        </body>
        </html>


  • jQuery autocomplete is not working in ASP.NET

    - by Abu Hamzah
    Is there something wrong in the code below? It's not firing at all.

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="HostPage.aspx.cs" Inherits="HostPage" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
          <title>Untitled Page</title>
          <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js"></script>
          <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js"></script>
          <link rel="stylesheet" href="http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.css" type="text/css" />
          <script type="text/javascript" src="http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.js"></script>
        </head>
        <body>
          <form id="form1" runat="server">
            <script type="text/javascript">
              $(document).ready(function() {
                $("#<%=txtHost.UniqueID %>").autocomplete("HostService.asmx/GetHosts", {
                  dataType: 'json',
                  contentType: "application/json; charset=utf-8",
                  parse: function(data) {
                    var rows = Array();
                    debugger
                    for (var i = 0; i < data.length; i++) {
                      rows[i] = { data: data[i], value: data[i].LName, result: data[i].LName };
                    }
                    return rows;
                  },
                  formatItem: function(row, i, max) {
                    return data.LName + ", " + data.FName;
                  }
                });
              });
            </script>
            <div>
              <asp:Label runat="server" ID='Label4'>Host Name:</asp:Label>
              <asp:TextBox ID="txtHost" runat='server'></asp:TextBox>
              <p>
            </div>
          </form>
        </body>
        </html>
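    One concrete detail that stands out in the snippet (an observation, not a full diagnosis): formatItem references data, which is the parse callback's argument and is not in scope there; in the legacy bassistance autocomplete plugin, formatItem receives the row being rendered as its first parameter. A corrected sketch:

        formatItem: function(row, i, max) {
            // "row" is the item built in parse(); "data" is not visible here.
            return row.LName + ", " + row.FName;
        }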


  • Managing the interval for the horizontal axis in Flex

    - by Roshan
    Hi guys, how can we manage the horizontal-axis interval in a Flex chart? What is actually happening is that data is inserted between two interval levels, and it causes a readability problem when we draw line grids in the graph: the data point is shown between the data grids. How can we move the axis or manage the data points?


  • Why can I not view foreign-language characters in my MySQL DB?

    - by Chris
    I am inserting the following characters into my DB: ?? / ?? This is the meta tag on the page that is inserting the characters:

        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />

    I have altered all the columns in the table that holds the characters to be utf8_unicode_ci. The foreign characters show up like so in the DB: 汉字 / 漢字 When I use a SQL statement to display those foreign characters on a page, they display correctly again as: ?? / ?? I am guessing I have some setting that is not correct in my DB, since it stores the text but does not display it correctly. What can I do to make the foreign-language characters display correctly in my DB?
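    A common cause of this exact symptom is the connection character set rather than the column collation: if the client connection runs as latin1, UTF-8 bytes are stored and read back verbatim, so the web page round-trips correctly while the database browser shows mojibake. A sketch of what to check (standard MySQL statements; my_table is a placeholder name, and most drivers let you set the connection charset as a connection option instead):

        -- Inspect the session and server character-set settings.
        SHOW VARIABLES LIKE 'character_set%';

        -- Ask MySQL to treat this connection as UTF-8.
        SET NAMES 'utf8';

        -- Confirm the column itself really is UTF-8.
        SHOW FULL COLUMNS FROM my_table;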


  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, which is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back. Some more background: this is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them. The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new arrays.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage is while this serialization is being done. I think we get large memory pool fragmentation when we de-serialize the object, and I expect there are also other problems with large memory pool fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.) Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the spread of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time. Other related questions I have asked: "How to Stream data from/to SQL Server BLOB fields?" and "Is there a SqlFileStream-like class that works with SQL Server 2005?"


  • Variable-sized packet structs with vectors

    - by Rev316
    Lately I've been diving into network programming, and I'm having some difficulty constructing a packet with a variable "data" property. Several prior questions have helped tremendously, but I'm still lacking some implementation details. I'm trying to avoid using variable-sized arrays and just use a vector, but I can't get it to be transmitted correctly, and I believe the problem is somewhere in the serialization. Now for some code.

    Packet header:

        class Packet {
        public:
            void* Serialize();
            bool Deserialize(void *message);
            unsigned int sender_id;
            unsigned int sequence_number;
            std::vector<char> data;
        };

    Packet implementation:

        typedef struct {
            unsigned int sender_id;
            unsigned int sequence_number;
            std::vector<char> data;
        } Packet;

        void* Packet::Serialize(int size) {
            Packet* p = (Packet *) malloc(8 + 30);
            p->sender_id = htonl(this->sender_id);
            p->sequence_number = htonl(this->sequence_number);
            p->data.assign(size, '&'); // just for testing purposes
        }

        bool Packet::Deserialize(void *message) {
            Packet *s = (Packet*)message;
            this->sender_id = ntohl(s->sender_id);
            this->sequence_number = ntohl(s->sequence_number);
            this->data = s->data;
        }

    During execution, I simply create a packet, assign its members, and send/receive accordingly. The above methods are only responsible for serialization; unfortunately, the data never gets transferred. A couple of things to point out here: I'm guessing the malloc is wrong, but I'm not sure how else to compute it (i.e. what other value it would be). Other than that, I'm unsure of the proper way to use a vector in this fashion, and would love for someone to show me how (code examples please!) :) Edit: I've awarded the question to the most comprehensive answer regarding the implementation with a vector data property. Appreciate all the responses!
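    For what it's worth, the usual fix is to serialize into a flat byte buffer rather than casting the struct: the vector's elements live on the heap, so neither malloc(8 + 30) nor a pointer cast can carry them across the wire. A minimal sketch of that approach (assuming POSIX htonl/ntohl, 32-bit unsigned int, and a fixed 8-byte header; error handling kept minimal):

        #include <arpa/inet.h>  // htonl, ntohl
        #include <cstring>      // std::memcpy
        #include <vector>

        struct Packet {
            unsigned int sender_id;
            unsigned int sequence_number;
            std::vector<char> data;

            // Two 32-bit header fields followed by the payload bytes.
            std::vector<char> Serialize() const {
                std::vector<char> buf(8 + data.size());
                unsigned int id  = htonl(sender_id);
                unsigned int seq = htonl(sequence_number);
                std::memcpy(&buf[0], &id, 4);
                std::memcpy(&buf[4], &seq, 4);
                if (!data.empty())
                    std::memcpy(&buf[8], &data[0], data.size());
                return buf;
            }

            // length is the number of bytes actually received.
            bool Deserialize(const char* bytes, std::size_t length) {
                if (length < 8) return false;
                unsigned int id, seq;
                std::memcpy(&id, bytes, 4);
                std::memcpy(&seq, bytes + 4, 4);
                sender_id = ntohl(id);
                sequence_number = ntohl(seq);
                data.assign(bytes + 8, bytes + length);
                return true;
            }
        };

    The sender then transmits buf.size() bytes starting at &buf[0]; the receiver recovers the payload length from the transport (e.g. the recv return value) or from an explicit length field added to the header.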


  • Can a view be returned as a JSON object in ASP.NET MVC?

    - by Chev
    I want to know if it is possible to return a view as a JSON object. In my controller I want to do something like the following:

        [AcceptVerbs("Post")]
        public JsonResult SomeActionMethod() {
            return new JsonResult { Data = new { success = true, view = PartialView("MyPartialView") } };
        }

    In HTML:

        $.post($(this).attr('action'), $(this).serialize(), function(Data) {
            alert(Data.success);
            $("#test").replaceWith(Data.view);
        });

    Any feedback greatly appreciated.


  • Hibernate: generating POJOs with equals

    - by jschoen
    We are using Hibernate in a new project, where we use hibernate.reveng.xml to create our *.hbm.xml files and the POJOs after that. We want to have equals methods in each of our POJOs. I found that you can use

        <meta attribute="use-in-equals">true</meta>

    in your hbm files to mark which properties to use in equals. But this would mean editing a lot of files, and then re-editing them in the future if/when we modify tables or columns in our DB. So I was wondering: is there a way to specify which properties to use in the equals method for each POJO (table) in the hibernate.reveng.xml file?
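    For context, marking a property with use-in-equals makes the generated POJO compare on that property as a business key; the generated code is roughly equivalent to this hand-written sketch (the Person class and ssn field are invented for illustration):

        @Override
        public boolean equals(Object other) {
            if (this == other) return true;
            if (!(other instanceof Person)) return false;
            Person that = (Person) other;
            // Compare only the properties marked with use-in-equals.
            return this.ssn == null ? that.ssn == null : this.ssn.equals(that.ssn);
        }

        @Override
        public int hashCode() {
            return ssn == null ? 0 : ssn.hashCode();
        }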


  • Derby + Hibernate ConstraintViolationException using many-to-many relationships

    - by user364470
    Hi, I'm new to Hibernate + Derby... I've seen this issue mentioned all over Google, but have not seen a proper resolution. The following code works fine with MySQL, but when I try it on Derby I get exceptions. (Each Tag has two sets of files and vice versa - many-to-many.)

    Tags.java:

        @Entity
        @Table(name="TAGS")
        public class Tags implements Serializable {
            @Id
            @GeneratedValue(strategy=GenerationType.AUTO)
            public long getId() { return id; }

            @ManyToMany(targetEntity=Files.class)
            @ForeignKey(name="USER_TAGS_FILES", inverseName="USER_FILES_TAGS")
            @JoinTable(name="USERTAGS_FILES",
                joinColumns=@JoinColumn(name="TAGS_ID"),
                inverseJoinColumns=@JoinColumn(name="FILES_ID"))
            public Set<data.Files> getUserFiles() { return userFiles; }

            @ManyToMany(mappedBy="autoTags", targetEntity=data.Files.class)
            public Set<data.Files> getAutoFiles() { return autoFiles; }

    Files.java:

        @Entity
        @Table(name="FILES")
        public class Files implements Serializable {
            @Id
            @GeneratedValue(strategy=GenerationType.AUTO)
            public long getId() { return id; }

            @ManyToMany(mappedBy="userFiles", targetEntity=data.Tags.class)
            public Set getUserTags() { return userTags; }

            @ManyToMany(targetEntity=Tags.class)
            @ForeignKey(name="AUTO_FILES_TAGS", inverseName="AUTO_TAGS_FILES")
            @JoinTable(name="AUTOTAGS_FILES",
                joinColumns=@JoinColumn(name="FILES_ID"),
                inverseJoinColumns=@JoinColumn(name="TAGS_ID"))
            public Set getAutoTags() { return autoTags; }

    I add some data to the DB, but when running on Derby these exceptions turn up (they don't appear with MySQL):

        SEVERE: DELETE on table 'FILES' caused a violation of foreign key constraint 'USER_FILES_TAGS' for key (3).  The statement has been rolled back.
        Jun 10, 2010 9:49:52 AM org.hibernate.event.def.AbstractFlushingEventListener performExecutions
        SEVERE: Could not synchronize database state with session
        org.hibernate.exception.ConstraintViolationException: could not delete: [data.Files#3]
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:96)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2712)
            at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2895)
            at org.hibernate.action.EntityDeleteAction.execute(EntityDeleteAction.java:97)
            at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:268)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:260)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:184)
            at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:613)
            at org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:344)
            at $Proxy13.flush(Unknown Source)
            at data.HibernateORM.removeFile(HibernateORM.java:285)
            at data.DataImp.removeFile(DataImp.java:195)
            at booting.DemoBootForTestUntilTestClassesExist.main(DemoBootForTestUntilTestClassesExist.java:62)

    I have never used Derby before, so maybe there is something crucial that I'm missing. 1) What am I doing wrong? 2) Is there any way of cascading properly when I have two many-to-many relationships between two classes? Thanks!
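    One pattern that often resolves this kind of constraint violation, sketched here as a guess at the missing piece (the method and entity names follow the classes above): before deleting a Files entity, detach it from every owning-side collection so Hibernate removes the join-table rows before the FILES row.

        // A sketch: clear the associations first, then delete.
        public void removeFile(org.hibernate.Session session, Files file) {
            // Tags owns the user-files association (it declares the @JoinTable),
            // so remove the file from each tag's owning-side collection.
            for (Object o : file.getUserTags()) {
                Tags tag = (Tags) o;
                tag.getUserFiles().remove(file);
            }
            // Files owns autoTags, so clearing our own collection is enough.
            file.getAutoTags().clear();
            session.delete(file);
            session.flush();
        }

    MySQL with non-transactional tables can mask ordering problems like this, which would explain why Derby is the first place the violation shows up.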


  • Cloud security and privacy

    - by Rakesh K
    Hi, I have a very basic doubt regarding cloud computing, which is catching on pretty fast these days. To my understanding, cloud computing is a paradigm in which companies put their data and applications on somebody else's machines, aka 'the cloud'. I want to know just how secure it is to put my data on third-party machines, especially if that data contains private details. In particular, how can an enterprise trust cloud computing service providers on this data-privacy aspect? Thanks, rakesh.


  • Can't log in: Error occurred while sending a direct message or getting the response

    - by Joshua Gitlin
    This belongs on Meta, but I can't ask it there since, well, I can't log in :-) I'm unable to log into my account using my OpenID, josh.gitlin.name, on either StackOverflow or Meta. The error message I receive after entering my OpenID on the login page and pressing "Login" is: "Unable to log in with your OpenID provider: Error occurred while sending a direct message or getting the response". My alternate OpenID, hmblprogrammer.pip.verisignlabs.com, doesn't work either. Is there anything I can try? (And is there any way this question can be associated with my account even though I'm not logged in?)


  • Regex Searching in Emacs

    - by Inaimathi
    I'm trying to write some Elisp code to format a bunch of legacy files. The idea is that if a file contains a section like

        "<meta name=\"keywords\" content=\"\\(.*?\\)\" />"

    then I want to insert a section that contains the existing keywords. If that section is not found, I want to insert my own default keywords into the same section. I've got the following function:

        (defun get-keywords ()
          (re-search-forward "<meta name=\"keywords\" content=\"\\(.*?\\)\" />")
          (goto-char 0) ;; the section I'm inserting will be at the beginning of the file
          (or (match-string 1) "Rubber duckies and cute ponies")) ;; or whatever the default keywords are

    When the function fails to find its target, it returns Search failed: "[regex here]" and prevents the rest of evaluation. Is there a way to have it return the default string, and ignore the error?
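    For reference, re-search-forward takes an optional NOERROR argument that makes a failed search return nil instead of signalling, which is the usual way to get exactly this fallback behavior. A sketch:

        (defun get-keywords ()
          ;; The third argument t means "return nil on failure instead of signalling".
          (if (re-search-forward "<meta name=\"keywords\" content=\"\\(.*?\\)\" />" nil t)
              (prog1 (match-string 1)
                (goto-char (point-min)))
            (goto-char (point-min))
            "Rubber duckies and cute ponies"))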


  • Double buffering for game objects: what's a nice, clean, generic C++ way?

    - by Gary
    This is in C++. So, I'm starting from scratch writing a game engine for fun, learning from the ground up. One of the ideas I want to implement is to have game object state (a struct) be double-buffered. For instance, I can have subsystems updating the new game object data while a render thread is rendering from the old data, by guaranteeing there is a consistent state stored within the game object (the data from last time). After rendering of the old and updating of the new is finished, I can swap buffers and do it again. The question is: what's a good forward-looking and generic OOP way to expose this to my classes while hiding implementation details as much as possible? I would like to know your thoughts and considerations. I was thinking operator overloading could be used, but how do I overload assignment for a templated class's member within my buffer class? For instance, I think this is an example of what I want:

        doublebuffer<Vector3> data;
        data.x = 5;     // would write to the member x within the new buffer
        int a = data.x; // would read from the old buffer's x member
        data.x += 1;    // I guess this shouldn't be allowed

    If this is possible, I could choose to enable or disable double-buffering of structs without changing much code. This is what I was considering:

        template <class T>
        class doublebuffer {
            T T1;
            T T2;
            T * current = &T1;
            T * old = &T2;
        public:
            doublebuffer();
            ~doublebuffer();
            void swap();
            operator=()? ...
        };

    and a game object would be like this:

        struct MyObjectData {
            int x;
            float afloat;
        };

        class MyObject : public Node {
            doublebuffer<MyObjectData> data;
            // functions...
        };

    What I have right now is functions that return pointers to the old and new buffers, and I guess any classes that use them have to be aware of this. Is there a better way?
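    One minimal way to square this circle, sketched under the assumption that per-field proxy objects are more trouble than they're worth: expose the two buffers as whole structs (read() for the old one, write() for the new one) and let swap() flip the pointers, so field-level syntax like data.x becomes data.write().x.

        #include <utility>

        template <class T>
        class doublebuffer {
            T buffers[2];
            T* current  = &buffers[0]; // being written this frame
            T* previous = &buffers[1]; // being rendered this frame
        public:
            const T& read() const { return *previous; } // renderer's view
            T& write() { return *current; }             // updaters' view
            void swap() {
                std::swap(current, previous);
                *current = *previous; // start the new frame from the last consistent state
            }
        };

        // usage:
        //   doublebuffer<MyObjectData> data;
        //   data.write().x = 5;    // new buffer
        //   int a = data.read().x; // old buffer
        //   data.swap();           // once per frame, after update + render

    Returning const from read() also gives the "shouldn't be allowed" behavior for free: the renderer cannot accidentally mutate the old buffer.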


  • Inference engine to calculate matching set according to internal rules

    - by Zecrates
    I have a set of objects with attributes, and a bunch of rules that, when applied to the set of objects, produce a subset of those objects. To make this easier to understand I'll give a concrete example. My objects are persons, and each has three attributes: country of origin, gender and age group (all attributes are discrete). I have a bunch of rules, like "all males from the US", which correspond to subsets of this larger set of objects. I'm looking for either an existing Java "inference engine" or something similar which will be able to map from the rules to a subset of persons, or advice on how to go about creating my own. I have read up on rule engines, but that term seems to be used exclusively for expert systems that externalize the business rules, and usually doesn't include any advanced form of inferencing. Here are some examples of the more complex scenarios I have to deal with:

    1. I need the conjunction of rules. So when presented with both "include all males" and "exclude all US persons in the 10-20 age group," I'm only interested in the males outside the US, and the males within the US who are outside the 10-20 age group.
    2. Rules may have different priorities (explicitly defined). So a rule saying "exclude all males" will override a rule saying "include all US males."
    3. Rules may be conflicting. So I could have both "include all males" and "exclude all males," in which case the priorities will have to settle the issue.
    4. Rules are symmetric. So "include all males" is equivalent to "exclude all females."
    5. Rules (or rather subsets) may have meta-rules (explicitly defined) associated with them. These meta-rules will have to be applied in any case where the original rule is applied, or if the subset is reached via inferencing. So if a meta-rule of "exclude the US" is attached to the rule "include all males," and I provide the engine with the rule "exclude all females," it should be able to infer that the "exclude all females" subset is equivalent to the "include all males" subset and as such apply the "exclude the US" rule additionally.

    I can in all likelihood live without item 5, but I do need all the other properties mentioned. Both my rules and objects are stored in a database and may be updated at any stage, so I'd need to instantiate the 'inference engine' when needed and destroy it afterward.
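    A hand-rolled starting point for items 1-3 might look like the sketch below (an illustration only; the Person class and its attribute names follow the example above, and symmetry and meta-rules are not handled). The idea: sort rules by priority and let the highest-priority matching rule decide each person's fate.

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;
        import java.util.function.Predicate;

        class Person {
            final String country, gender, ageGroup;
            Person(String country, String gender, String ageGroup) {
                this.country = country; this.gender = gender; this.ageGroup = ageGroup;
            }
        }

        class Rule {
            final Predicate<Person> matches; // which persons the rule talks about
            final boolean include;           // include or exclude them
            final int priority;              // higher priority wins conflicts
            Rule(Predicate<Person> matches, boolean include, int priority) {
                this.matches = matches; this.include = include; this.priority = priority;
            }
        }

        class RuleEngine {
            // A person ends up in the result iff the highest-priority rule that
            // matches it is an "include"; persons matching no rule are dropped.
            static List<Person> apply(List<Person> persons, List<Rule> rules) {
                List<Rule> sorted = new ArrayList<>(rules);
                sorted.sort(Comparator.comparingInt((Rule r) -> r.priority).reversed());
                List<Person> result = new ArrayList<>();
                for (Person p : persons) {
                    for (Rule r : sorted) {
                        if (r.matches.test(p)) {
                            if (r.include) result.add(p);
                            break; // first matching rule (highest priority) decides
                        }
                    }
                }
                return result;
            }
        }

    For the conjunction example, new Rule(p -> "US".equals(p.country) && "10-20".equals(p.ageGroup), false, 2) together with new Rule(p -> "male".equals(p.gender), true, 1) keeps males while dropping the US 10-20 group, since the exclude rule outranks the include.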


  • Partition table corrupted (USB flash drive)

    - by 13ren
    It's an 8 GB Patriot thumb drive, which I've used extensively with lots of data. Today, it is detected, but all data is gone. (EDIT: at least some data is still there, but the partition table is gone.)

    EDIT @Sathya (thanks) here's the relevant output from sudo fdisk -l:

        Disk /dev/sdc: 8019 MB, 8019509248 bytes
        247 heads, 62 sectors/track, 1022 cylinders
        Units = cylinders of 15314 * 512 = 7840768 bytes
        Disk /dev/sdc doesn't contain a valid partition table

    It looks like it is /dev/sdc, with that 8 GB... and no partition table. I tried to mount /dev/sdc (and then dmesg | tail):

        /media> sudo mount /dev/sdc mytmp
        mount: wrong fs type, bad option, bad superblock on /dev/sdc,
               missing codepage or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
        /media> dmesg | tail
        [   24.300000] sdc: unknown partition table
        [   24.320000] sd 2:0:0:0: Attached scsi removable disk sdc
        [   24.370000] usb-storage: device scan complete
        [   26.870000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [   26.870000] EXT2-fs: group descriptors corrupted!
        [   50.420000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [ 5565.470000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [ 5565.470000] EXT2-fs: group descriptors corrupted!

    EDIT @Col: results from testdisk:

        Disk /dev/sdc - 8013 MB / 7642 MiB - CHS 1022 247 62
        Current partition structure:
             Partition                  Start        End    Size in sectors
        Partition sector doesn't have the endmark 0xAA55

    After I hit [Proceed], it says:

        Structure: Ok.
        Keys A: add partition, L: load backup, Enter: to continue

    The "Structure: Ok." seems reassuring... will "A: add partition" make my old data accessible (if it's still there), or will it make a new, fresh partition? Another option is "[ MBR Code ] Write TestDisk MBR code to first sector" - would it be better to do this?

    EDIT I found that at least some of my data is still on the flash drive, by using the command below and searching for English text in less (like " the "):

        cat /dev/sde | tr -cd '\11\12\40\1540-\176' | less

    (The drive changed from "/dev/sdb" to "/dev/sde" because I connected some extra drives today.) I've learnt that "/dev/sde1" would be the first partition, and "/dev/sde" is the whole drive. Because Unix treats these devices just like files, you can use all the ordinary Unix file commands on them, like cat, and then process them like any other stream of data. The tr above removes non-printable characters ("\40" is space, which I wanted to preserve). In less, you can use "/" to search, similar to Vim. How can I get my data back (assuming it's still there)? If only the partition table is corrupted, is there a standard "partition recovery tool"? Is there a way to "repartition" without deleting everything?


  • Still confused about parsing JSON in GWT

    - by graybow
    Please help me. I created a project named 'tesdb3' in Eclipse, created the PHP side to access the database, and made the output JSON. I put userdata.php in the war folder, then compiled the tesdb3 project. The tesdb3 folder and userdata.php in war were moved to the local server (I use WAMP); I put the PHP in the tesdb3 folder. This is the result from my localhost/phpmyadmin/tesdb3/userdata.php:

        [{"kode":"002","nama":"bambang gentolet"},{"kode":"012","nama":"Algiz"}]

    From that result I think the PHP side is working well. Then I created UserData.java as a JSNI overlay like this:

        package com.tesdb3.client;

        import com.google.gwt.core.client.JavaScriptObject;

        class UserData extends JavaScriptObject {
            protected UserData() {}
            public final native String getKode() /*-{ return this.kode; }-*/;
            public final native String getNama() /*-{ return this.nama; }-*/;
            public final String getFullData() { return getKode() + ":" + getNama(); }
        }

    Then finally, in tesdb3.java:

        public class Tesdb3 implements EntryPoint {
            String url = "http://localhost/phpmyadmin/tesdb3/datauser.php";

            private native JsArray<UserData> getuserdata(String json) /*-{
                return eval(json);
            }-*/;

            public void LoadData() throws RequestException {
                RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, URL.encode(url));
                builder.sendRequest(null, new RequestCallback() {
                    @Override
                    public void onError(Request request, Throwable exception) {
                        Window.alert("error " + exception);
                    }

                    public void onResponseReceived(Request request, Response response) {
                        Window.alert("betul" + response.getText());
                        //data(getuserdata(response.getText()));
                    }
                });
            }

            public void data(JsArray<UserData> data) {
                for (int i = 0; i < data.length(); i++) {
                    String lkode = data.get(i).getKode();
                    String lname = data.get(i).getNama();
                    Label l = new Label(lkode + " " + lname);
                    tb.setWidget(i, 0, l);
                }
                RootPanel.get().add(new HTML("my data"));
                RootPanel.get().add(tb);
            }

            public void onModuleLoad() {
                try {
                    LoadData();
                } catch (RequestException e) {
                }
            }
        }

    The result just shows the string "my data", and the Window.alert(response.getText()) shows nothing. Why?
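    Two things worth checking, offered as educated guesses since the full project isn't shown: first, GWT's same-origin policy means a RequestBuilder call to http://localhost/phpmyadmin/tesdb3/... will fail silently when the page is served from a different host or port (e.g. the dev-mode server), which would explain the empty alert; second, the code fetches datauser.php while the file described above is userdata.php. Separately, on GWT 2.1+ there is a safer replacement for the hand-written eval JSNI:

        import com.google.gwt.core.client.JsArray;
        import com.google.gwt.core.client.JsonUtils;

        // Inside onResponseReceived: parse without a hand-written eval();
        // throws if the payload is not well-formed JSON.
        JsArray<UserData> users = JsonUtils.safeEval(response.getText());
        data(users);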


  • How to attach multiple files to a Wordpress post?

    - by erik.brannstrom
    I'm currently working on a project where we are using WordPress 3.0 RC. The idea is to create custom post types with meta boxes to make the system easier for the client to use. Some of these posts need to have multiple files attached to them, and by "attached" I do not mean inserted into the post, but rather kept separate from the post body (in fact, a given post type might not even have text, only files). I'm wondering if there is a standard approach for allowing multiple files to be attached to a WordPress post. I've managed to add a meta box that allows one file from the media library to be selected, but I have no idea how to extend this to allow an arbitrary number of files. Hope someone can help!
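    For the display side at least, media-library uploads already live as posts of type 'attachment' parented to the post, so listing every file attached to a post can be done with standard WordPress functions; a sketch (assumes $post is in scope, as in a template loop — the meta-box UI for adding files is a separate problem):

        <?php
        // List all files attached to the given post.
        $attachments = get_posts(array(
            'post_type'   => 'attachment',
            'post_parent' => $post->ID,
            'numberposts' => -1, // all of them
        ));
        foreach ($attachments as $attachment) {
            echo wp_get_attachment_link($attachment->ID);
        }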


  • Dynamically traversing a JSON object without using eval

    - by Matt Willhite
    Given I have the following (which is dynamically generated and varies in length):

        associations = ["employer", "address"];

    I am trying to traverse the JSON object, and want to form something like the following:

        data.employer.address

    or:

        data[associations[0]][associations[1]]

    without doing this:

        eval("data." + associations.join('.'));

    Finally, I may be shunned for saying this, but is it okay to use eval in an instance like this? I am just retrieving data.
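    For reference, the bracket form generalizes to any path length without eval by walking the key list; a small sketch:

        // Walk an arbitrary key path into an object without eval.
        function dig(obj, keys) {
            var result = obj;
            for (var i = 0; i < keys.length && result != null; i++) {
                result = result[keys[i]];
            }
            return result;
        }

        // usage: dig(data, ["employer", "address"]) === data.employer.address

    The null guard also makes missing intermediate keys return undefined instead of throwing, which eval would not give you.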


  • Rescaling an image and writing the rescaled image file on BlackBerry

    - by Karthick
    I am using the following code to resize an image and save it to the BlackBerry device. After scaling the image I try to write the image file, but it gives the same data (the height and width of the image are unchanged). I need to produce a rescaled image file. Can anyone help me?

        class ResizeImage extends MainScreen implements FieldChangeListener {
            private String path = "file:///SDCard/BlackBerry/pictures/test.jpg";
            private ButtonField btn;

            ResizeImage() {
                btn = new ButtonField("Write File");
                btn.setChangeListener(this);
                add(btn);
            }

            public void fieldChanged(Field field, int context) {
                if (field == btn) {
                    try {
                        InputStream inputStream = null;
                        // Get file connection
                        FileConnection fileConnection = (FileConnection) Connector.open(path);
                        if (fileConnection.exists()) {
                            inputStream = fileConnection.openInputStream();
                            ByteArrayOutputStream baos = new ByteArrayOutputStream();
                            int j = 0;
                            while ((j = inputStream.read()) != -1) {
                                baos.write(j);
                            }
                            byte data[] = baos.toByteArray();
                            inputStream.close();
                            fileConnection.close();

                            WriteFile("file:///SDCard/BlackBerry/pictures/org_Image.jpg", data);

                            EncodedImage eImage = EncodedImage.createEncodedImage(data, 0, data.length);
                            int scaleFactorX = Fixed32.div(Fixed32.toFP(eImage.getWidth()), Fixed32.toFP(80));
                            int scaleFactorY = Fixed32.div(Fixed32.toFP(eImage.getHeight()), Fixed32.toFP(80));
                            eImage = eImage.scaleImage32(scaleFactorX, scaleFactorY);

                            WriteFile("file:///SDCard/BlackBerry/pictures/resize.jpg", eImage.getData());

                            BitmapField bit = new BitmapField(eImage.getBitmap());
                            add(bit);
                        }
                    } catch (Exception e) {
                        System.out.println("Exception is ==> " + e.getMessage());
                    }
                }
            }

            void WriteFile(String fileName, byte[] data) {
                FileConnection fconn = null;
                try {
                    fconn = (FileConnection) Connector.open(fileName, Connector.READ_WRITE);
                } catch (IOException e) {
                    System.out.print("Error opening file");
                }
                if (fconn.exists()) {
                    try {
                        fconn.delete();
                    } catch (IOException e) {
                        System.out.print("Error deleting file");
                    }
                }
                try {
                    fconn.create();
                } catch (IOException e) {
                    System.out.print("Error creating file");
                }
                OutputStream out = null;
                try {
                    out = fconn.openOutputStream();
                } catch (IOException e) {
                    System.out.print("Error opening output stream");
                }
                try {
                    out.write(data);
                } catch (IOException e) {
                    System.out.print("Error writing to output stream");
                }
                try {
                    fconn.close();
                } catch (IOException e) {
                    System.out.print("Error closing file");
                }
            }
        }
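    One likely explanation, offered as an assumption about how EncodedImage behaves rather than a verified diagnosis: scaleImage32 only affects how the image is decoded for display, while getData() still returns the original encoded bytes, so the file written out is byte-for-byte the input JPEG. If the device runs OS 5.0+, a sketch of re-encoding the scaled bitmap might look like this (JPEGEncodedImage.encode is a 5.0 API; the quality value 75 is arbitrary):

        import net.rim.device.api.system.Bitmap;
        import net.rim.device.api.system.JPEGEncodedImage;

        // Re-encode the scaled bitmap so the saved bytes really are 80x80.
        Bitmap scaled = eImage.getBitmap(); // already scaled by scaleImage32
        JPEGEncodedImage jpeg = JPEGEncodedImage.encode(scaled, 75);
        WriteFile("file:///SDCard/BlackBerry/pictures/resize.jpg", jpeg.getData());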


  • How much time does grid.py take to run?

    - by trinity
    Hello all, I am using libsvm for binary classification. I wanted to try grid.py, as it is said to improve results. I ran this script for five files in separate terminals, and the script has been running for more than 12 hours. This is the state of my five terminals now:

        [root@localhost tools]# python grid.py sarts_nonarts_feat.txt>grid_arts.txt
        Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sgames_nongames_feat.txt>grid_games.txt
        Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sref_nonref_feat.txt>grid_ref.txt
        Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sbiz_nonbiz_feat.txt>grid_biz.txt
        Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py snews_nonnews_feat.txt>grid_news.txt
        Wrong input format at line 494
        Traceback (most recent call last):
          File "grid.py", line 223, in run
            if rate is None: raise "get no rate"
        TypeError: exceptions must be classes or instances, not str

    I had redirected the outputs to files, but those files currently contain nothing. And the following files were created:

        sbiz_nonbiz_feat.txt.out
        sbiz_nonbiz_feat.txt.png
        sarts_nonarts_feat.txt.out
        sarts_nonarts_feat.txt.png
        sgames_nongames_feat.txt.out
        sgames_nongames_feat.txt.png
        sref_nonref_feat.txt.out
        sref_nonref_feat.txt.png
        snews_nonnews_feat.txt.out (this one is empty)

    There is just one line of information in the .out files. The .png files are gnuplot plots, but I don't understand what the above plots/warnings convey. Should I re-run them? Can anyone please tell me how much time this script might take if each input file contains about 144,000 lines? Thanks and regards.


  • Django loaddata throws ValidationError: [u'Enter a valid date in YYYY-MM-DD format.'] on null=True field

    - by datakid
    When I run:

        django-admin.py loaddata ../data/library_authors.json

    the error is:

        ...
        ValidationError: [u'Enter a valid date in YYYY-MM-DD format.']

    The model:

        class Writer(models.Model):
            first = models.CharField(u'First Name', max_length=30)
            other = models.CharField(u'Other Names', max_length=30, blank=True)
            last = models.CharField(u'Last Name', max_length=30)
            dob = models.DateField(u'Date of Birth', blank=True, null=True)

            class Meta:
                abstract = True
                ordering = ['last']
                unique_together = ("first", "last")

        class Author(Writer):
            language = models.CharField(max_length=20, choices=LANGUAGES, blank=True)

            class Meta:
                verbose_name = 'Author'
                verbose_name_plural = 'Authors'

    Note that the dob DateField has blank=True, null=True. The JSON file has this structure:

        [
          {
            "pk": 1,
            "model": "books.author",
            "fields": { "dob": "", "other": "", "last": "Carey", "language": "", "first": "Peter" }
          },
          {
            "pk": 3,
            "model": "books.author",
            "fields": { "dob": "", "other": "", "last": "Brown", "language": "", "first": "Carter" }
          }
        ]

    The backing MySQL database has the relevant date field in the relevant table set to NULL as default, with Null? = YES. Any ideas on what I'm doing wrong, or how I can get loaddata to accept null date values?
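    A likely culprit, judging only from the fixture shown: an empty string "" is not a null date, so the DateField tries to parse "" as YYYY-MM-DD and fails; null=True fields should use JSON null instead. Each fixture entry would then look like:

        {
          "pk": 1,
          "model": "books.author",
          "fields": { "dob": null, "other": "", "last": "Carey", "language": "", "first": "Peter" }
        }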


  • Cross-thread operation not valid: Control accessed from a thread other than the thread it was created on

    - by SilverHorse
    I have a scenario (Windows Forms, C#, .NET). There is a main form which hosts a user control. The user control does some heavy data operation, such that if I directly call the UserControl_Load method, the UI becomes non-responsive for the duration of the load method's execution. To overcome this I load data on a different thread (trying to change existing code as little as I can). I used a background worker thread which will load the data and, when done, will notify the application that it has finished. Now comes the real problem: all the UI (the main form and its child user controls) was created on the primary main thread. In the LOAD method of the user control, I'm fetching data based on the values of some control (like a textbox) on the user control. The pseudocode would look like this:

        // CODE 1
        UserContrl1_LoadDataMethod()
        {
            if (textbox1.text == "MyName") // <<== this gives the exception
            {
                // Load data corresponding to "MyName".
                // Populate a global variable List<string> which will be bound to a grid at some later stage.
            }
        }

    The exception it gave was: "Cross-thread operation not valid: Control accessed from a thread other than the thread it was created on." To learn more about this I did some googling, and a suggestion came up to use code like the following:

        // CODE 2
        UserContrl1_LoadDataMethod()
        {
            if (InvokeRequired) // Line #1
            {
                this.Invoke(new MethodInvoker(UserContrl1_LoadDataMethod));
                return;
            }
            if (textbox1.text == "MyName") // <<== now it won't give the exception
            {
                // Load data corresponding to "MyName".
                // Populate a global variable List<string> which will be bound to a grid at some later stage.
            }
        }

    But it seems I'm back to square one: the application again becomes non-responsive. It seems to be due to the execution of the Line #1 if block: the loading task is again done by the parent thread and not the thread I spawned. I don't know whether I perceived this right or wrong; I'm new to threading. How do I resolve this, and what is the effect of executing the Line #1 if block? The situation is this: I want to load data into a global variable based on the value of a control. I don't want to change the value of the control from the child thread, and I'm never going to do that from a child thread. I'm only accessing the value, so that the corresponding data can be fetched from the database.
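    One common shape for this, offered as a sketch only (FetchDataFor and myGrid are hypothetical names; textbox1 follows the pseudocode above): read the control's value on the UI thread before starting the worker, do the slow query off-thread with that captured value, and touch controls again only in the completion callback, which BackgroundWorker already raises on the UI thread.

        // A sketch: capture the control value up front, then work off-thread.
        private void LoadDataAsync()
        {
            string name = textbox1.Text; // read on the UI thread

            var worker = new System.ComponentModel.BackgroundWorker();
            worker.DoWork += (s, e) =>
            {
                // Runs on the worker thread: no control access here,
                // only the captured string.
                e.Result = FetchDataFor(name); // hypothetical slow DB call
            };
            worker.RunWorkerCompleted += (s, e) =>
            {
                // Raised back on the UI thread: safe to bind to the grid.
                myGrid.DataSource = e.Result;
            };
            worker.RunWorkerAsync();
        }

    This sidesteps the Invoke-at-the-top pattern entirely, which (as observed) just bounces the whole method back onto the UI thread and blocks it again.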


  • When splitting MP4s with ffmpeg, how do I include metadata?

    - by Josh
    I have a few MP4s that I want to upload to my Flickr account, but Flickr has a maximum size of 500 MB and mine are about 550, so I was planning to simply split them in half and then upload them. But I want to make sure all the metadata is included, and it does not seem to be. I have tried each of the following with no luck (at the end of this post I have the original and the new ffprobe outputs):

        ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_metadata 0:0 SANY0069A.MP4

        ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_meta_data SANY0069.MP4:SANY0069A.MP4 SANY0069A.MP4

    With this one I manually reproduced the individual meta tags that I took from ffmpeg -i SANY0069A.MP4 -f ffmetadata meta.txt:

        ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy \
            -metadata major_brand="mp42" -metadata minor_version="1" \
            -metadata compatible_brands="mp42avc1" -metadata creation_time="2012-09-29 09:05:50" \
            -metadata comment="SANYO DIGITAL CAMERA CA9" -metadata comment-eng="SANYO DIGITAL CAMERA CA9" \
            SANY0069A.MP4

    Using the output of the former command I also tried this:

        ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -f ffmetadata -i meta.txt SANY0069A.MP4

    Sample output from my first command:

        ffmpeg version 0.8.12, Copyright (c) 2000-2011 the FFmpeg developers
          built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2)
          configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
          libavutil    51.  9. 1 / 51.  9. 1
          libavcodec   53.  8. 0 / 53.  8. 0
          libavformat  53.  5. 0 / 53.  5. 0
          libavdevice  53.  1. 1 / 53.  1. 1
          libavfilter   2. 23. 0 /  2. 23. 0
          libswscale    2.  0. 0 /  2.  0. 0
          libpostproc  51.  2. 0 / 51.  2. 0
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069.MP4':
          Metadata:
            major_brand     : mp42
            minor_version   : 1
            compatible_brands: mp42avc1
            creation_time   : 2012-09-29 09:05:50
            comment         : SANYO DIGITAL CAMERA CA9
            comment-eng     : SANYO DIGITAL CAMERA CA9
          Duration: 00:08:38.71, start: 0.000000, bitrate: 9142 kb/s
            Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9007 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc
            Metadata:
              creation_time   : 2012-09-29 09:05:50
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s
            Metadata:
              creation_time   : 2012-09-29 09:05:50
        File 'SANY0069A.MP4' already exists. Overwrite ? [y/N] y
        Output #0, mp4, to 'SANY0069A.MP4':
          Metadata:
            major_brand     : mp42
            minor_version   : 1
            compatible_brands: mp42avc1
            creation_time   : 2012-09-29 09:05:50
            comment         : SANYO DIGITAL CAMERA CA9
            comment-eng     : SANYO DIGITAL CAMERA CA9
            encoder         : Lavf53.5.0
            Stream #0.0(eng): Video: libx264, yuv420p, 1280x720 [PAR 1:1 DAR 16:9], q=2-31, 9007 kb/s, 30k tbn, 29.97 tbc
            Metadata:
              creation_time   : 2012-09-29 09:05:50
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, 127 kb/s
            Metadata:
              creation_time   : 2012-09-29 09:05:50
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop, [?] for help
        frame= 7773 fps=4644 q=-1.0 Lsize=  289607kB time=00:04:19.35 bitrate=9147.4kbits/s
        video:285416kB audio:4033kB global headers:0kB muxing overhead 0.054571%

    Finally, when I compare the ffprobe output of the original and the first split part, I get the following two outputs (the ffprobe 0.8.12 version banner and configuration lines, identical to the banner above, are repeated in both and trimmed here):

    Original:

        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069.MP4':
          Metadata:
            major_brand     : mp42
            minor_version   : 1
            compatible_brands: mp42avc1
            creation_time   : 2012-09-29 09:05:50
            comment         : SANYO DIGITAL CAMERA CA9
            comment-eng     : SANYO DIGITAL CAMERA CA9
          Duration: 00:08:38.71, start: 0.000000, bitrate: 9142 kb/s
            Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9007 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc
            Metadata:
              creation_time   : 2012-09-29 09:05:50
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s
            Metadata:
              creation_time   : 2012-09-29 09:05:50

    Split:

        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069A.MP4':
          Metadata:
            major_brand     : isom
            minor_version   : 512
            compatible_brands: isomiso2avc1mp41
            creation_time   : 1970-01-01 00:00:00
            encoder         : Lavf53.5.0
            comment         : SANYO DIGITAL CAMERA CA9
          Duration: 00:04:19.37, start: 0.000000, bitrate: 9146 kb/s
            Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9015 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s
            Metadata:
              creation_time   : 1970-01-01 00:00:00

    I know this is incredibly long, but it's actually quite a simple question; I thought it would be best to provide as much detail as possible. Any advice here would be great. Thanks.
