
  • XML Postback issue

    - by Mikey1980
    I have a script that is designed to parse XML postbacks from Ultracart; right now it just dumps them into a MySQL table. The script works fine if I point it at an XML file on my localhost, but using 'php://input' it doesn't seem to be grabbing anything. My logs show Apache returning 200 after the post, so I have no idea what could be wrong or how to drill down into the issue. Here's the code (note the post starts mid-script: the closing brace and else at the end belong to a token check whose opening if isn't shown):

        $doc = new DOMDocument();
        $doc->loadXML($page);
        $handle = fopen("test2/".time().".xml", "w+");
        fwrite($handle, trim($page)); // it doesn't save this either :'(
        fclose($handle); // the original post had fclose() with no handle
        require_once('includes/database.php');
        $db = new Database('localhost', 'user', 'password', 'db_name');
        $data = array();
        $exports = $doc->getElementsByTagName("export");
        foreach ($exports as $export) {
            $orders = $export->getElementsByTagName("order");
            foreach ($orders as $order) {
                $data['order_id'] = $order->getElementsByTagName("order_id")->item(0)->nodeValue;
                $data['payment_status'] = $order->getElementsByTagName("payment_status")->item(0)->nodeValue;
                $date_array = explode(" ", $order->getElementsByTagName("payment_date_time")->item(0)->nodeValue);
                if ($date_array[1] == 'JAN') { $date_array[1] = '01'; }
                if ($date_array[1] == 'FEB') { $date_array[1] = '02'; }
                if ($date_array[1] == 'MAR') { $date_array[1] = '03'; }
                if ($date_array[1] == 'APR') { $date_array[1] = '04'; }
                if ($date_array[1] == 'MAY') { $date_array[1] = '05'; } // converts Ultracart date to
                if ($date_array[1] == 'JUN') { $date_array[1] = '06'; } // MySQL date
                if ($date_array[1] == 'JUL') { $date_array[1] = '07'; }
                if ($date_array[1] == 'AUG') { $date_array[1] = '08'; }
                if ($date_array[1] == 'SEP') { $date_array[1] = '09'; }
                if ($date_array[1] == 'OCT') { $date_array[1] = '10'; }
                if ($date_array[1] == 'NOV') { $date_array[1] = '11'; }
                if ($date_array[1] == 'DEC') { $date_array[1] = '12'; }
                $data['payment_date'] = $date_array[2]."-".$date_array[1]."-".$date_array[0];
                $data['payment_time'] = $date_array[3];
                //... we'll skip this, there are 80-some elements
                $data['discount'] = $order->getElementsByTagName("discount")->item(0)->nodeValue;
                $data['distribution_center_code'] = $order->getElementsByTagName("distribution_center_code")->item(0)->nodeValue;
            }
        }
        $db->insert('order_history', $data);
    } else die('ERROR: Token Check Failed!');


  • What is the meaning of ": base" in the constructor definition?

    - by DotNetBeginner
    What is the meaning of ": base" in the constructor of the following class (MyClass)? Please explain the concept behind the constructor definition given below for class MyClass.

        public class MyClass : WorkerThread
        {
            public MyClass(object data) : base(data)
            {
                // some code
            }
        }

        public abstract class WorkerThread
        {
            private object ThreadData;
            private Thread thisThread;

            public WorkerThread(object data)
            {
                this.ThreadData = data;
            }

            public WorkerThread()
            {
                ThreadData = null;
            }
        }


  • Filtering Security Logs by User and Logon Type

    - by Trido
    I have been asked to find out when a user has logged on to the system in the last week. Now, the audit logs in Windows should contain all the info I need. I think if I search for Event ID 4624 (Logon Success) with a specific AD user and Logon Type 2 (Interactive Logon), it should give me the information I need, but for the life of me I cannot figure out how to actually filter the Event Log to get it. Is it possible inside of the Event Viewer, or do you need to use an external tool to parse it to this level? I found http://nerdsknowbest.blogspot.com.au/2013/03/filter-security-event-logs-by-user-in.html which seemed to be part of what I needed. I modified it slightly to only give me the last 7 days' worth. Below is the XML I tried.

        <QueryList>
          <Query Id="0" Path="Security">
            <Select Path="Security">*[System[(EventID=4624) and TimeCreated[timediff(@SystemTime) &lt;= 604800000]]]</Select>
            <Select Path="Security">*[EventData[Data[@Name='Logon Type']='2']]</Select>
            <Select Path="Security">*[EventData[Data[@Name='subjectUsername']='Domain\Username']]</Select>
          </Query>
        </QueryList>

    It only gave me the last 7 days, but the rest of it did not work. Can anyone assist me with this?

    EDIT: Thanks to the suggestions of Lucky Luke I have been making progress. The below is my current query, although as I will explain it isn't returning any results.

        <QueryList>
          <Query Id="0" Path="Security">
            <Select Path="Security">
              *[System[(EventID='4624')] and
                System[TimeCreated[timediff(@SystemTime) &lt;= 604800000]] and
                EventData[Data[@Name='TargetUserName']='john.doe'] and
                EventData[Data[@Name='LogonType']='2']
              ]
            </Select>
          </Query>
        </QueryList>

    As I mentioned, it wasn't returning any results, so I have been messing with it a bit. I can get it to produce the results correctly until I add in the LogonType line. After that, it returns no results. Any idea why this might be?

    EDIT 2: I updated the LogonType line to the following:

        EventData[Data[@Name='LogonType'] and (Data='2' or Data='7')]

    This should capture workstation logons as well as workstation unlocks, but I still get nothing. I then modify it to search for other logon types like 3 or 8, which it finds plenty of. This leads me to believe that the query works correctly, but that for some reason there are no entries in the Event Logs with Logon Type equalling 2, which makes no sense to me. Is it possible to turn this off?


  • SSRS CSV Rendering Export Issue

    - by BALAMURUGAN
    I have my data in two tables like this:

        UserId  Name
        2       Welcome
        3       Rename

        Password  Data
        2dfdf     dfd
        3         dfdf

    I need the data, when exported to CSV, to come out like this (one table after the other):

        UserId,Name
        2,Welcome
        3,Rename

        Password,Data
        2dfdf,dfd
        3,dfdf

    But it is displaying like this:

        UserId,Name,Password,Data
        1,Welcome,2dfdf,dfd
        3,Rename,3,dfdf

    How can I get the intended result?


  • What is the difference between Cloud Computing and Grid Computing?

    - by this.__curious_geek
    Hi, can you please help me understand the significant differences between cloud computing and grid computing? What are the precise definitions and target application domains for both? I'm looking for conceptual insights along with technicalities. For example, Windows Azure is a cloud OS; do we have anything like that for grid computing? In the past I worked on distributed and parallel computing, and I used libraries like PVM and MPI for processing distribution. Out of curiosity, I wanted to know if grid computing is simply distributed computing extended over the Internet?


  • MVC model flow?

    - by fuzzygoat
    I am setting up an application using the MVC model, and I have a query regarding the flow of information from the UI to the data model. What I need to do is place data from the UI in the model. What I have done is write a method in the view which collects the required data in an object and then passes it to the model. The model then takes ownership of the data so that the view can release its ownership. Does this sound sensible?


  • jQuery parse JSON

    - by Eeyore
    I can't parse this JSON, which I have no control over. What am I doing wrong here?

    data.json:

        { "img": "img1.jpg", "img": "img2.jpg", "size": [52, 97] }
        { "img": "img3.jpg", "img": "img4.jpg", "size": [52, 97] }

    jQuery:

        $.getJSON("data.json", function (data) {
            $.each(data, function (i, item) {
                alert(item.img[i]);
            });
        });
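    For context (an observation about the feed rather than a fix, since the feed is third-party): the file above is not valid JSON at all. It contains two root-level values, and each object repeats the "img" key, so parsers either reject it or silently keep only the last "img". $.getJSON therefore never hands the callback anything usable. Purely as a sketch, here is what a parseable equivalent would look like if the provider could be persuaded to emit a single array; the field names are kept from the post, but the array-of-images shape is an assumption:

        // Hypothetical reshaped feed: one JSON array, unique keys per object.
        // [ { "img": ["img1.jpg", "img2.jpg"], "size": [52, 97] },
        //   { "img": ["img3.jpg", "img4.jpg"], "size": [52, 97] } ]
        $.getJSON("data.json", function (data) {
            $.each(data, function (i, item) {
                // item.img is an array here, so iterate it rather than indexing by i
                for (var j = 0; j < item.img.length; j++) {
                    alert(item.img[j] + " (" + item.size[0] + "x" + item.size[1] + ")");
                }
            });
        });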


  • PHP array include performance

    - by tau
    What type of performance hit can I expect if I include a huge PHP array? For example, let's say I have a 1 GB PHP array in "data.php" that looks like:

        $data = array(
            // 1GB worth of data
        );

    If I include that huge "data.php" file in "header.php", how will it affect the performance of "header.php" when it executes? Thanks!


  • How to customize and reuse a DataGridColumnHeader style?

    - by instcode
    Hi all, I'm trying to customize the column headers of a DataGrid to show sub-column headers, as in my screenshot. I've made a style for 2 sub-columns in the following XAML:

        <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     xmlns:data="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data"
                     xmlns:primitives="clr-namespace:System.Windows.Controls.Primitives;assembly=System.Windows.Controls.Data"
                     xmlns:sl="clr-namespace:UI"
                     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                     x:Class="UI.ColumnHeaderGrid"
                     mc:Ignorable="d">
          <UserControl.Resources>
            <Style x:Key="SplitColumnHeaderStyle" TargetType="primitives:DataGridColumnHeader">
              <Setter Property="Foreground" Value="#FF000000"/>
              <Setter Property="HorizontalContentAlignment" Value="Center"/>
              <Setter Property="VerticalContentAlignment" Value="Center"/>
              <Setter Property="IsTabStop" Value="False"/>
              <Setter Property="SeparatorBrush" Value="#FFC9CACA"/>
              <Setter Property="Padding" Value="4"/>
              <Setter Property="Template">
                <Setter.Value>
                  <ControlTemplate TargetType="primitives:DataGridColumnHeader">
                    <Grid x:Name="Root">
                      <Grid.ColumnDefinitions>
                        <ColumnDefinition/>
                        <ColumnDefinition Width="Auto"/>
                      </Grid.ColumnDefinitions>
                      <Rectangle x:Name="BackgroundRectangle" Fill="#FF1F3B53" Stretch="Fill" Grid.ColumnSpan="2"/>
                      <Rectangle x:Name="BackgroundGradient" Stretch="Fill" Grid.ColumnSpan="2">
                        <Rectangle.Fill>
                          <LinearGradientBrush EndPoint=".7,1" StartPoint=".7,0">
                            <GradientStop Color="#FCFFFFFF" Offset="0.015"/>
                            <GradientStop Color="#F7FFFFFF" Offset="0.375"/>
                            <GradientStop Color="#E5FFFFFF" Offset="0.6"/>
                            <GradientStop Color="#D1FFFFFF" Offset="1"/>
                          </LinearGradientBrush>
                        </Rectangle.Fill>
                      </Rectangle>
                      <Grid>
                        <Grid.ColumnDefinitions>
                          <ColumnDefinition/>
                          <ColumnDefinition Width="1"/>
                          <ColumnDefinition/>
                        </Grid.ColumnDefinitions>
                        <Grid.RowDefinitions>
                          <RowDefinition/>
                          <RowDefinition/>
                          <RowDefinition/>
                        </Grid.RowDefinitions>
                        <TextBlock Grid.Row="0" Grid.ColumnSpan="3" Text="Headers" TextAlignment="Center"/>
                        <Rectangle Grid.Row="1" Grid.ColumnSpan="3" Fill="{TemplateBinding SeparatorBrush}" Height="1"/>
                        <TextBlock Grid.Row="2" Grid.Column="0" Text="Header 1" TextAlignment="Center"/>
                        <Rectangle Grid.Row="2" Grid.Column="1" Fill="{TemplateBinding SeparatorBrush}" Width="1"/>
                        <TextBlock Grid.Row="2" Grid.Column="2" Text="Header 2" TextAlignment="Center"/>
                        <Path x:Name="SortIcon" Grid.Column="2" Fill="#FF444444" Stretch="Uniform"
                              HorizontalAlignment="Left" Margin="4,0,0,0" VerticalAlignment="Center" Width="8"
                              Opacity="0" RenderTransformOrigin=".5,.5"
                              Data="F1 M -5.215,6.099L 5.215,6.099L 0,0L -5.215,6.099 Z "/>
                      </Grid>
                      <Rectangle x:Name="VerticalSeparator" Fill="{TemplateBinding SeparatorBrush}"
                                 VerticalAlignment="Stretch" Width="1"
                                 Visibility="{TemplateBinding SeparatorVisibility}" Grid.Column="1"/>
                    </Grid>
                  </ControlTemplate>
                </Setter.Value>
              </Setter>
            </Style>
          </UserControl.Resources>
          <data:DataGrid x:Name="LayoutRoot">
            <data:DataGrid.Columns>
              <data:DataGridTemplateColumn HeaderStyle="{StaticResource SplitColumnHeaderStyle}">
                <data:DataGridTemplateColumn.CellTemplate>
                  <DataTemplate>
                    <Grid>
                      <Grid.ColumnDefinitions>
                        <ColumnDefinition/>
                        <ColumnDefinition/>
                      </Grid.ColumnDefinitions>
                      <Border Grid.Column="0" BorderBrush="#FFC9CACA" BorderThickness="0,0,0,0">
                        <TextBlock Grid.Column="0" Text="{Binding GridData.Column1}"/>
                      </Border>
                      <Border Grid.Column="1" BorderBrush="#FFC9CACA" BorderThickness="1,0,0,0">
                        <TextBlock Grid.Column="0" Text="{Binding GridData.Column2}"/>
                      </Border>
                    </Grid>
                  </DataTemplate>
                </data:DataGridTemplateColumn.CellTemplate>
              </data:DataGridTemplateColumn>
            </data:DataGrid.Columns>
          </data:DataGrid>
        </UserControl>

    Now I want to reuse and extend this style to support 2-6 sub-column headers, but I don't know if there is a way to do this, like ContentPresenter "overriding":

        <Style x:Key="SplitColumnHeaderStyle" TargetType="primitives:DataGridColumnHeader">
          <Setter Property="Template">
            <Setter.Value>
              ...
              <ContentPresenter Content="{TemplateBinding Content}" .../>
              ...
            </Setter.Value>
          </Setter>
        </Style>

        <Style x:Key="TwoSubColumnHeaderStyle" BasedOn="SplitColumnHeaderStyle">
          <Setter Property="Content">
            <Setter.Value>
              <Grid 2x2 .../>
            </Setter.Value>
          </Setter>
        </Style>

        <Style x:Key="ThreeSubColumnHeaderStyle" BasedOn="SplitColumnHeaderStyle">
          <Setter Property="Content">
            <Setter.Value>
              <Grid 2x3 .../>
            </Setter.Value>
          </Setter>
        </Style>

    Anyway, please help me with these issues:

    1. Given the template above, how do I support more sub-column headers without having to create a new template for each?
    2. Assume that the issue above is solved. How could I attach column names from outside the styles?
    3. I see that some parts, properties, and visualization rules in the XAML are just copies from the original Silverlight component's style, i.e. BackgroundGradient, BackgroundRectangle, VisualStateManager... They must be there to support default behaviors and effects, but does anyone know how to remove them while keeping all the default behaviors/effects?

    Please be specific, because I'm just getting started with C# and Silverlight. Thanks.


  • jQuery autocomplete is not working in ASP.NET

    - by Abu Hamzah
    Is there something wrong in the below code? It's not firing at all.

    Edit:

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="HostPage.aspx.cs" Inherits="HostPage" %>

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Untitled Page</title>
            <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js"></script>
            <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js"></script>
            <link rel="stylesheet" href="http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.css" type="text/css" />
            <script type="text/javascript" src="http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.js"></script>
        </head>
        <body>
            <form id="form1" runat="server">
                <script type="text/javascript">
                    $(document).ready(function () {
                        $("#<%=txtHost.UniqueID %>").autocomplete("HostService.asmx/GetHosts", {
                            dataType: 'json',
                            contentType: "application/json; charset=utf-8",
                            parse: function (data) {
                                var rows = Array();
                                debugger
                                for (var i = 0; i < data.length; i++) {
                                    rows[i] = { data: data[i], value: data[i].LName, result: data[i].LName };
                                }
                                return rows;
                            },
                            formatItem: function (row, i, max) {
                                return data.LName + ", " + data.FName;
                            }
                        });
                    });
                </script>
                <div>
                    <asp:Label runat="server" ID='Label4'>Host Name:</asp:Label>
                    <asp:TextBox ID="txtHost" runat='server'></asp:TextBox>
                    <p>
                </div>
            </form>
        </body>
        </html>
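    Two things worth checking, offered as hypotheses rather than a verified diagnosis: ASMX script services normally wrap their JSON payload in a "d" property and only return JSON for a POST with a JSON content type, while this autocomplete plugin issues a GET with a q parameter, so the parse callback may never receive a bare array. Separately, formatItem references data, which is not in scope there; its first parameter is row. A defensive parse along these lines makes the payload shape visible (the unwrapping of .d is an assumption about the service's output):

        parse: function (data) {
            // ASMX script services usually wrap the payload as {"d": [...]}
            var list = (data && data.d) ? data.d : data;
            var rows = [];
            for (var i = 0; i < list.length; i++) {
                rows.push({ data: list[i], value: list[i].LName, result: list[i].LName });
            }
            return rows;
        },
        formatItem: function (row, i, max) {
            // use the row handed in by the plugin, not the outer data variable
            return row.LName + ", " + row.FName;
        }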


  • Managing the interval for horizontal axis in flex

    - by Roshan
    Hi guys, how can we manage the horizontalAxis interval in a Flex chart? What's actually happening is that data is inserted between two interval levels, and it's causing a readability problem when we draw grid lines in the graph: the data points are shown in between the grid lines. How can we move the axis or manage the data points?


  • Variable sized packet structs with vectors

    - by Rev316
    Lately I've been diving into network programming, and I'm having some difficulty constructing a packet with a variable "data" property. Several prior questions have helped tremendously, but I'm still lacking some implementation details. I'm trying to avoid variable sized arrays and just use a vector, but I can't get it to be transmitted correctly, and I believe the problem is somewhere in serialization. Now for some code.

    Packet header:

        class Packet {
        public:
            void* Serialize();
            bool Deserialize(void *message);

            unsigned int sender_id;
            unsigned int sequence_number;
            std::vector<char> data;
        };

    Packet impl:

        typedef struct {
            unsigned int sender_id;
            unsigned int sequence_number;
            std::vector<char> data;
        } Packet;

        void* Packet::Serialize(int size) {
            Packet* p = (Packet *) malloc(8 + 30);
            p->sender_id = htonl(this->sender_id);
            p->sequence_number = htonl(this->sequence_number);
            p->data.assign(size, '&'); // just for testing purposes
        }

        bool Packet::Deserialize(void *message) {
            Packet *s = (Packet*)message;
            this->sender_id = ntohl(s->sender_id);
            this->sequence_number = ntohl(s->sequence_number);
            this->data = s->data;
        }

    During execution, I simply create a packet, assign its members, and send/receive accordingly. The above methods are only responsible for serialization. Unfortunately, the data never gets transferred. A couple of things to point out here: I'm guessing the malloc is wrong, but I'm not sure how else to compute it (i.e. what other value it would be). Other than that, I'm unsure of the proper way to use a vector in this fashion, and would love for someone to show me how (code examples please!) :)

    Edit: I've awarded the question to the most comprehensive answer regarding the implementation with a vector data property. Appreciate all the responses!


  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, and that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back...

    Some more background. This is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them.

    The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new arrays.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large memory pool fragmentation when we de-serialize the object; I expect there are also other problems with large memory pool fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked:

    - How to Stream data from/to SQL Server BLOB fields?
    - Is there a SqlFileStream like class that works with Sql Server 2005?


  • Can a View be returned as a JSON object in ASP.Net MVC

    - by Chev
    I want to know if it is possible to return a view as a JSON object. In my controller I want to do something like the following:

        [AcceptVerbs("Post")]
        public JsonResult SomeActionMethod()
        {
            return new JsonResult
            {
                Data = new { success = true, view = PartialView("MyPartialView") }
            };
        }

    In the HTML:

        $.post($(this).attr('action'), $(this).serialize(), function (Data) {
            alert(Data.success);
            $("#test").replaceWith(Data.view);
        });

    Any feedback greatly appreciated.
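    One observation about the payload shape, offered as a guess at the symptom rather than a confirmed answer: JsonResult serializes whatever object is assigned to Data, so if view is set to the PartialViewResult itself rather than to the partial's rendered markup, the browser receives a serialized result object, not an HTML string, and replaceWith produces nothing useful. A quick client-side guard makes this visible:

        $.post($(this).attr('action'), $(this).serialize(), function (Data) {
            if (typeof Data.view !== "string") {
                // the server serialized a result object instead of rendered HTML;
                // the partial needs to be rendered to a string on the server first
                alert("view arrived as " + typeof Data.view);
                return;
            }
            $("#test").replaceWith(Data.view);
        });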


  • Derby + Hibernate ConstraintViolationException using many-to-many relationships

    - by user364470
    Hi, I'm new to Hibernate + Derby... I've seen this issue mentioned all over Google, but have not seen a proper resolution. The following code works fine with MySQL, but when I try it on Derby I get exceptions. (Each Tag has two sets of files and vice versa: many-to-many.)

    Tags.java:

        @Entity
        @Table(name = "TAGS")
        public class Tags implements Serializable {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            public long getId() { return id; }

            @ManyToMany(targetEntity = Files.class)
            @ForeignKey(name = "USER_TAGS_FILES", inverseName = "USER_FILES_TAGS")
            @JoinTable(name = "USERTAGS_FILES",
                       joinColumns = @JoinColumn(name = "TAGS_ID"),
                       inverseJoinColumns = @JoinColumn(name = "FILES_ID"))
            public Set<data.Files> getUserFiles() { return userFiles; }

            @ManyToMany(mappedBy = "autoTags", targetEntity = data.Files.class)
            public Set<data.Files> getAutoFiles() { return autoFiles; }

    Files.java:

        @Entity
        @Table(name = "FILES")
        public class Files implements Serializable {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            public long getId() { return id; }

            @ManyToMany(mappedBy = "userFiles", targetEntity = data.Tags.class)
            public Set getUserTags() { return userTags; }

            @ManyToMany(targetEntity = Tags.class)
            @ForeignKey(name = "AUTO_FILES_TAGS", inverseName = "AUTO_TAGS_FILES")
            @JoinTable(name = "AUTOTAGS_FILES",
                       joinColumns = @JoinColumn(name = "FILES_ID"),
                       inverseJoinColumns = @JoinColumn(name = "TAGS_ID"))
            public Set getAutoTags() { return autoTags; }

    I add some data to the DB, but when running over Derby these exceptions turn up (they don't when using MySQL):

        SEVERE: DELETE on table 'FILES' caused a violation of foreign key constraint 'USER_FILES_TAGS' for key (3). The statement has been rolled back.
        Jun 10, 2010 9:49:52 AM org.hibernate.event.def.AbstractFlushingEventListener performExecutions
        SEVERE: Could not synchronize database state with session
        org.hibernate.exception.ConstraintViolationException: could not delete: [data.Files#3]
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:96)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2712)
            at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2895)
            at org.hibernate.action.EntityDeleteAction.execute(EntityDeleteAction.java:97)
            at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:268)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:260)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:184)
            at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:613)
            at org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:344)
            at $Proxy13.flush(Unknown Source)
            at data.HibernateORM.removeFile(HibernateORM.java:285)
            at data.DataImp.removeFile(DataImp.java:195)
            at booting.DemoBootForTestUntilTestClassesExist.main(DemoBootForTestUntilTestClassesExist.java:62)

    I have never used Derby before, so maybe there is something crucial that I'm missing. 1) What am I doing wrong? 2) Is there any way of cascading properly when I have two many-to-many relationships between the same two classes? Thanks!


  • Cloud security and privacy

    - by Rakesh K
    Hi, I have a very basic doubt regarding cloud computing, which is catching on pretty fast these days. To my understanding, cloud computing is a paradigm in which companies put up their data and applications on somebody else's machines, aka 'The Cloud'. I want to know just how secure it is to put my data on some third-party machines, especially if my data contains private details. In particular, how can an enterprise trust the cloud computing service providers with this data privacy aspect? Thanks, rakesh.


  • Double Buffering for Game objects, what's a nice clean generic C++ way?

    - by Gary
    This is in C++. So, I'm starting from scratch writing a game engine for fun, learning from the ground up. One of the ideas I want to implement is to have game object state (a struct) be double-buffered. For instance, I can have subsystems updating the new game object data while a render thread is rendering from the old data, by guaranteeing there is a consistent state stored within the game object (the data from last time). After rendering of old and updating of new is finished, I can swap buffers and do it again.

    The question is, what's a good forward-looking and generic OOP way to expose this to my classes while hiding implementation details as much as possible? I would like to know your thoughts and considerations. I was thinking operator overloading could be used, but how do I overload assignment for a templated class's member within my buffer class? For instance, I think this is an example of what I want:

        doublebuffer<Vector3> data;
        data.x = 5;     // would write to the member x within the new buffer
        int a = data.x; // would read from the old buffer's x member
        data.x += 1;    // I guess this shouldn't be allowed

    If this is possible, I could choose to enable or disable double-buffering structs without changing much code. This is what I was considering:

        template <class T>
        class doublebuffer {
            T T1;
            T T2;
            T * current = &T1; // (the post had current = T1; a pointer needs the address)
            T * old = &T2;
        public:
            doublebuffer();
            ~doublebuffer();
            void swap();
            // operator=() ? ...
        };

    and a game object would be like this:

        struct MyObjectData {
            int x;
            float afloat;
        };

        class MyObject : public Node {
            doublebuffer<MyObjectData> data;
            // functions...
        };

    What I have right now is functions that return pointers to the old and new buffers, and I guess any classes that use them have to be aware of this. Is there a better way?


  • Partition table corrupted (USB flash drive)

    - by 13ren
    It's an 8 GB Patriot thumb drive, which I've used extensively with lots of data. Today, it is detected, but all data is gone. (EDIT: at least some data is still there, but the partition table is gone.)

    EDIT @Sathya (thanks), here's the relevant output from sudo fdisk -l:

        Disk /dev/sdc: 8019 MB, 8019509248 bytes
        247 heads, 62 sectors/track, 1022 cylinders
        Units = cylinders of 15314 * 512 = 7840768 bytes
        Disk /dev/sdc doesn't contain a valid partition table

    It looks like it is /dev/sdc, with that 8 GB... and no partition table. I tried to mount /dev/sdc (and then ran dmesg | tail):

        /media> sudo mount /dev/sdc mytmp
        mount: wrong fs type, bad option, bad superblock on /dev/sdc,
               missing codepage or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
        /media> dmesg | tail
        [   24.300000] sdc: unknown partition table
        [   24.320000] sd 2:0:0:0: Attached scsi removable disk sdc
        [   24.370000] usb-storage: device scan complete
        [   26.870000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [   26.870000] EXT2-fs: group descriptors corrupted!
        [   50.420000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [ 5565.470000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [ 5565.470000] EXT2-fs: group descriptors corrupted!

    EDIT @Col: results from testdisk:

        Disk /dev/sdc - 8013 MB / 7642 MiB - CHS 1022 247 62
        Current partition structure:
             Partition                  Start        End    Size in sectors
        Partition sector doesn't have the endmark 0xAA55

    After I hit [Proceed], it says:

        Structure: Ok.
        Keys A: add partition, L: load backup, Enter: to continue

    The "Structure: Ok." seems reassuring... will "A: add partition" make my old data accessible (if it's still there), or will it make a new, fresh partition? Another option is "[ MBR Code ] Write TestDisk MBR code to first sector". Would it be better to do this?

    EDIT: I found that at least some of my data is still on the flash drive, by using the command below and searching for English text in less (like " the "):

        cat /dev/sde | tr -cd '\11\12\40\1540-\176' | less

    (The drive changed from "/dev/sdb" to "/dev/sde" because I connected some extra drives today.) I've learnt that "/dev/sde1" would be the first partition, and "/dev/sde" is the whole drive. Because unix treats these devices just like files, you can use all the ordinary unix file commands on them, like cat, and then process them like any other stream of data. The tr above removes non-printable characters ("\40" is space, which I wanted to preserve). In less, you can use "/" to search, similar to Vim.

    How can I get my data back (assuming it's still there)? If only the partition table is corrupted, is there a standard "partition recovery tool"? Is there a way to "repartition" without deleting everything?


  • MSDTC Transaction

    - by Shimjith
    I am using a linked server for a transaction. Example:

        Alter Proc [dbo].[usp_Select_TransferingDatasFromServerCheckingforExample]
            @RserverName varchar(100), ----- Server Name
            @RUserid Varchar(100),     ----- server user id
            @RPass Varchar(100),       ----- Server Password
            @DbName varchar(100)       ----- Server database
        As
        Set nocount on
        Set Xact_abort on
        Declare @user varchar(100)
        Declare @userID varchar(100)
        Declare @Db Varchar(100)
        Declare @Lserver varchar(100)
        Select @Lserver = @@servername
        Select @userID = suser_name()
        select @User = user
        Exec('if exists(Select 1 From [Master].[' + @user + '].[sysservers] where srvname = ''' + @RserverName + ''')
              begin
                  Exec sp_droplinkedsrvlogin ''' + @RserverName + ''',''' + @userID + '''
                  exec sp_dropserver ''' + @RserverName + '''
              end ')
        set @RserverName = '[' + @RserverName + ']'
        BEGIN TRY
            BEGIN TRANSACTION
            declare @ColumnList varchar(max)
            set @ColumnList = null
            select @ColumnList = case when @ColumnList is not null
                                      then @ColumnList + ',' + quotename(name)
                                      else quotename(name) end
            from syscolumns
            where id = object_id('bditm')
            order by colid
            set identity_insert Bditm on
            exec ('Insert Into Bditm (' + @ColumnList + ') Select * From ' + @RserverName + '.' + @DbName + '.' + @user + '.Bditm')
            set identity_insert Bditm off
            Commit
            Select 1
        End try
        Begin catch
            if (@@ERROR < 0)
            Begin
                if @@trancount > 0 -- (the comparison operator was lost in the original post)
                Begin
                    Rollback transaction
                    Select 0
                END
            End
        End Catch
        set @RserverName = replace(replace(@RserverName, '[', ''), ']', '')
        Exec sp_droplinkedsrvlogin @RserverName, @userID
        Exec sp_dropserver @RserverName

    This is the error that occurred:

        The Microsoft Distributed Transaction Coordinator (MS DTC) has cancelled the distributed transaction.


  • Still confused parsing JSON in GWT

    - by graybow
    Please help me. I created a project named 'tesdb3' in Eclipse. I created the PHP side to access the database and made the output JSON. I created userdata.php in the war folder, then compiled the tesdb3 project. The tesdb3 folder and userdata.php in war were moved to the local server (I use WAMP). I put the PHP in the tesdb3 folder. This is the result from localhost/phpmyadmin/tesdb3/userdata.php:

        [{"kode":"002","nama":"bambang gentolet"},{"kode":"012","nama":"Algiz"}]

    From that result I think the PHP side is working correctly. Then I created UserData.java as a JSNI overlay like this:

        package com.tesdb3.client;

        import com.google.gwt.core.client.JavaScriptObject;

        class UserData extends JavaScriptObject {
            protected UserData() {}

            public final native String getKode() /*-{ return this.kode; }-*/;
            public final native String getNama() /*-{ return this.nama; }-*/;

            public final String getFullData() {
                return getKode() + ":" + getNama();
            }
        }

    Then finally, in Tesdb3.java:

        public class Tesdb3 implements EntryPoint {

            String url = "http://localhost/phpmyadmin/tesdb3/datauser.php";

            private native JsArray<UserData> getuserdata(String json) /*-{
                return eval(json);
            }-*/;

            public void LoadData() throws RequestException {
                RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, URL.encode(url));
                builder.sendRequest(null, new RequestCallback() {
                    @Override
                    public void onError(Request request, Throwable exception) {
                        Window.alert("error " + exception);
                    }

                    public void onResponseReceived(Request request, Response response) {
                        Window.alert("betul" + response.getText());
                        //data(getuserdata(response.getText()));
                    }
                });
            }

            public void data(JsArray<UserData> data) {
                for (int i = 0; i < data.length(); i++) {
                    String lkode = data.get(i).getKode();
                    String lname = data.get(i).getNama();
                    Label l = new Label(lkode + " " + lname);
                    tb.setWidget(i, 0, l);
                }
                RootPanel.get().add(new HTML("my data"));
                RootPanel.get().add(tb);
            }

            public void onModuleLoad() {
                try {
                    LoadData();
                } catch (RequestException e) {
                }
            }
        }

    The result just shows the string "my data", and Window.alert(response.getText()) shows nothing. Why?
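    Two observations, offered as hypotheses rather than a verified diagnosis: the code requests datauser.php while the file described above is userdata.php, and if the GWT dev-mode page is served from a different origin than WAMP, the browser's same-origin policy can leave response.getText() empty without ever reaching onError. One more caution about the JSNI eval: a bare JSON array like the payload above does evaluate cleanly, but a single JSON object does not, because the braces parse as a block. This plain-JavaScript sketch shows the usual parenthesis wrapping:

        // eval on a bare object literal fails; parentheses force expression parsing
        var raw = '{"kode":"002","nama":"bambang gentolet"}';
        var parsed = eval('(' + raw + ')'); // works
        // var broken = eval(raw);          // SyntaxError in most engines
        alert(parsed.kode + " / " + parsed.nama);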


  • Image rescale and write rescaled image file in BlackBerry

    - by Karthick
    I am using the following code to resize an image and save the file to the BlackBerry device. After the image scale, I try to write the image file to the device, but it gives the same data (the height and width of the image are unchanged). I need to produce a rescaled image file. Can anyone help me?

        class ResizeImage extends MainScreen implements FieldChangeListener {

            private String path = "file:///SDCard/BlackBerry/pictures/test.jpg";
            private ButtonField btn;

            ResizeImage() {
                btn = new ButtonField("Write File");
                btn.setChangeListener(this);
                add(btn);
            }

            public void fieldChanged(Field field, int context) {
                if (field == btn) {
                    try {
                        InputStream inputStream = null;
                        // Get file connection
                        FileConnection fileConnection = (FileConnection) Connector.open(path);
                        if (fileConnection.exists()) {
                            inputStream = fileConnection.openInputStream();
                            //byte data[] = inputStream.toString().getBytes();
                            ByteArrayOutputStream baos = new ByteArrayOutputStream();
                            int j = 0;
                            while ((j = inputStream.read()) != -1) {
                                baos.write(j);
                            }
                            byte data[] = baos.toByteArray();
                            inputStream.close();
                            fileConnection.close();

                            WriteFile("file:///SDCard/BlackBerry/pictures/org_Image.jpg", data);

                            EncodedImage eImage = EncodedImage.createEncodedImage(data, 0, data.length);
                            int scaleFactorX = Fixed32.div(Fixed32.toFP(eImage.getWidth()), Fixed32.toFP(80));
                            int scaleFactorY = Fixed32.div(Fixed32.toFP(eImage.getHeight()), Fixed32.toFP(80));
                            eImage = eImage.scaleImage32(scaleFactorX, scaleFactorY);

                            WriteFile("file:///SDCard/BlackBerry/pictures/resize.jpg", eImage.getData());

                            BitmapField bit = new BitmapField(eImage.getBitmap());
                            add(bit);
                        }
                    } catch (Exception e) {
                        System.out.println("Exception is ==> " + e.getMessage());
                    }
                }
            }

            void WriteFile(String fileName, byte[] data) {
                FileConnection fconn = null;
                try {
                    fconn = (FileConnection) Connector.open(fileName, Connector.READ_WRITE);
                } catch (IOException e) {
                    System.out.print("Error opening file");
                }
                if (fconn.exists()) {
                    try {
                        fconn.delete();
                    } catch (IOException e) {
                        System.out.print("Error deleting file");
                    }
                }
                try {
                    fconn.create();
                } catch (IOException e) {
                    System.out.print("Error creating file");
                }
                OutputStream out = null;
                try {
                    out = fconn.openOutputStream();
                } catch (IOException e) {
                    System.out.print("Error opening output stream");
                }
                try {
                    out.write(data);
                } catch (IOException e) {
                    System.out.print("Error writing to output stream");
                }
                try {
                    fconn.close();
                } catch (IOException e) {
                    System.out.print("Error closing file");
                }
            }
        }


  • Dynamically Forming a JSON object traversal without using eval

    - by Matt Willhite
    Given I have the following (which is dynamically generated and varies in length):

        associations = ["employer", "address"];

    I'm trying to traverse the JSON object, wanting to form something like the following:

        data.employer.address
        // or
        data[associations[0]][associations[1]]

    without doing this:

        eval("data." + associations.join('.'));

    Finally, I may be shunned for saying this, but is it okay to use eval in an instance like this? I'm just retrieving data.
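    For what it's worth, the lookup can be done without eval by walking the path one property at a time. A minimal sketch in plain JavaScript (resolvePath is a hypothetical helper name; it returns undefined if any link in the chain is missing):

        function resolvePath(obj, parts) {
            // walk one property per element of the path array
            var current = obj;
            for (var i = 0; i < parts.length; i++) {
                if (current == null) return undefined; // missing link in the chain
                current = current[parts[i]];
            }
            return current;
        }

        var associations = ["employer", "address"];
        var value = resolvePath(data, associations); // same as data.employer.address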


  • How much time does grid.py take to run?

    - by trinity
    Hello all, I am using libsvm for binary classification and wanted to try grid.py, as it is said to improve results. I ran this script for five files in separate terminals, and the script has been running for more than 12 hours. This is the state of my 5 terminals now:

        [root@localhost tools]# python grid.py sarts_nonarts_feat.txt>grid_arts.txt
        Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sgames_nongames_feat.txt>grid_games.txt
        Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sref_nonref_feat.txt>grid_ref.txt
        Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sbiz_nonbiz_feat.txt>grid_biz.txt
        Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py snews_nonnews_feat.txt>grid_news.txt
        Wrong input format at line 494
        Traceback (most recent call last):
          File "grid.py", line 223, in run
            if rate is None: raise "get no rate"
        TypeError: exceptions must be classes or instances, not str

    I had redirected the outputs to files, but those files for now contain nothing. And the following files were created:

        sbiz_nonbiz_feat.txt.out
        sbiz_nonbiz_feat.txt.png
        sarts_nonarts_feat.txt.out
        sarts_nonarts_feat.txt.png
        sgames_nongames_feat.txt.out
        sgames_nongames_feat.txt.png
        sref_nonref_feat.txt.out
        sref_nonref_feat.txt.png
        snews_nonnews_feat.txt.out (-- is empty)

    There's just one line of information in the .out files; the ".png" files are some gnuplot plots. But I don't understand what the above gnuplot warnings convey. Should I re-run them? Can anyone tell me how much time this script might take if each input file contains about 144,000 lines? Thanks and regards.

