Search Results



  • Thumbnail image saved with worse quality on Windows Server 2003

    - by Angelo
    In an ASP.NET 2.0 application I am trying to create thumbnails from uploaded images. When I test the application on my PC under Windows 7 it works fine, but on the real Windows Server 2003 machine the resized image has worse quality. Where could this difference come from? A different JPEG codec? If so, how can it be updated on Windows Server 2003? Thanks! Here is the code.

    Resizing the image:

        Bitmap newBmp = new Bitmap(imgWidth, imgHeight, PixelFormat.Format24bppRgb);
        newBmp.SetResolution(inputBmp.HorizontalResolution, inputBmp.VerticalResolution);

        // Create a graphics object attached to the new bitmap
        Graphics newBmpGraphics = Graphics.FromImage(newBmp);
        newBmpGraphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
        newBmpGraphics.SmoothingMode = SmoothingMode.HighQuality;
        newBmpGraphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
        newBmpGraphics.DrawImage(inputBmp,
            new Rectangle(0, 0, imgWidth, imgHeight),
            new Rectangle(0, 0, inputBmp.Width, inputBmp.Height),
            GraphicsUnit.Pixel);

    Saving the image:

        System.IO.Stream imgStream = new System.IO.MemoryStream();

        // Get the ImageCodecInfo for the desired target format
        ImageCodecInfo destCodec = FindCodecForType(ImageMimeTypes.JPEG);
        if (destCodec == null)
        {
            // No codec available for that format
            throw new ArgumentException("The requested format image/jpeg does not have an available codec installed", "destFormat");
        }

        // Create an EncoderParameters collection to contain the
        // parameters that control the dest format's encoder
        EncoderParameters destEncParams = new EncoderParameters(1);
        EncoderParameter qualityParam = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, (long)quality);
        destEncParams.Param[0] = qualityParam;

        // Save w/ the selected codec and encoder parameters
        inputBmp.Save(imgStream, destCodec, destEncParams);
        Bitmap destBitmap = new Bitmap(imgStream);
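    Two things are worth checking here, though neither is confirmed by the question: the save step encodes inputBmp rather than the resized newBmp (possibly just an artifact of the excerpt), and the installed GDI+ JPEG codec can differ between Windows versions. A minimal sketch that saves the resized bitmap and prints the codec version for comparison between the two machines (FindCodecForType is rewritten as a hypothetical helper over GetImageEncoders):

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Linq;

        static class ThumbnailSaver
        {
            // Hypothetical equivalent of the question's FindCodecForType helper.
            static ImageCodecInfo FindCodecForType(string mimeType) =>
                ImageCodecInfo.GetImageEncoders().FirstOrDefault(c => c.MimeType == mimeType);

            public static void SaveJpeg(Bitmap resized, string path, long quality)
            {
                ImageCodecInfo jpeg = FindCodecForType("image/jpeg");
                if (jpeg == null)
                    throw new InvalidOperationException("No JPEG encoder installed.");

                // Comparing this output between Windows 7 and Windows Server 2003
                // shows whether the two machines use different codec versions.
                Console.WriteLine("{0} v{1}", jpeg.CodecName, jpeg.Version);

                var encoderParams = new EncoderParameters(1);
                encoderParams.Param[0] = new EncoderParameter(Encoder.Quality, quality);
                resized.Save(path, jpeg, encoderParams); // the resized bitmap, not the input
            }
        }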


  • Issue with transparent texture on 3D primitive, XNA 4.0

    - by Bevin
    I need to draw a large set of cubes, all with (possibly) unique textures on each side. Some of the textures also have areas of transparency. Cubes behind ones with transparent textures should show through the transparent texture. However, it seems that the order in which I draw the cubes decides whether the transparency works or not, which is something I want to avoid. Look here:

        cubeEffect.CurrentTechnique = cubeEffect.Techniques["Textured"];
        Block[] cubes = new Block[4];
        cubes[0] = new Block(BlockType.leaves, new Vector3(0, 0, 3));
        cubes[1] = new Block(BlockType.dirt, new Vector3(0, 1, 3));
        cubes[2] = new Block(BlockType.log, new Vector3(0, 0, 4));
        cubes[3] = new Block(BlockType.gold, new Vector3(0, 1, 4));

        foreach (Block b in cubes)
        {
            b.shape.RenderShape(GraphicsDevice, cubeEffect);
        }

    This is the code in the Draw method. The textures behind the leaf cube are not visible on the other side, and when I reverse indices 3 and 0 in the array, the result changes. It is clear that the order of drawing is affecting the cubes. I suspect it may have to do with the blend mode, but I have no idea where to start with that.
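    Transparency in a rasterizer is order-dependent unless it is handled explicitly. A common approach (sketched below under stated assumptions, not confirmed as the fix for this exact scene) is to draw opaque cubes first, then draw the transparent ones back to front with alpha blending enabled and depth writes disabled:

        // Sketch: opaque geometry first, then transparent cubes sorted back to front.
        // HasTransparency, Position and cameraPosition are assumed members for
        // illustration; they are not in the question's code. Requires System.Linq.
        GraphicsDevice.BlendState = BlendState.Opaque;
        GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        foreach (Block b in cubes.Where(c => !c.HasTransparency))
            b.shape.RenderShape(GraphicsDevice, cubeEffect);

        GraphicsDevice.BlendState = BlendState.AlphaBlend;
        GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead; // test depth, don't write it
        foreach (Block b in cubes
                     .Where(c => c.HasTransparency)
                     .OrderByDescending(c => Vector3.DistanceSquared(cameraPosition, c.Position)))
            b.shape.RenderShape(GraphicsDevice, cubeEffect);

    For textures whose pixels are either fully opaque or fully transparent, alpha testing in the shader avoids the sorting altogether.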


  • ASP.NET MVC4: How do I keep from having multiple identical results in my lookup tables

    - by sehummel
    I'm new to ASP.NET so this may be an easy question. I'm seeding my database with several rows of dummy data. Here is one of my rows:

        new Software {
            Title = "Microsoft Office",
            Version = "2012",
            SerialNumber = "12346231434543",
            Platform = "PC",
            Notes = "Macs really rock!",
            PurchaseDate = "2011-12-04",
            Suite = true,
            SubscriptionEndDate = null,
            SeatCount = 0,
            SoftwareTypes = new List<SoftwareType> { new SoftwareType { Type = "Suite" } },
            Locations = new List<Location> { new Location { LocationName = "Paradise" } },
            Publishers = new List<SoftwarePublisher> { new SoftwarePublisher { Publisher = "Microsoft" } }
        }

    But when I do this, a new Location row is created for each Software row, with LocationName set in each one, even though we only have two locations. How do I get it to create a LocationID property on the Software class that points at a row in my Locations table? Here is my Location class:

        public class Location
        {
            public int Id { get; set; }

            [Required]
            [StringLength(20)]
            public string LocationName { get; set; }

            public virtual Software Software { get; set; }
        }

    I have this line in my Software class to reference this table:

        public virtual List<Location> Locations { get; set; }

    Again, what I want when I am done is a Locations table with two entries, and a LocationID field in my Software table. How do I do this?
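    A sketch of one way to get there (the property, context, and seed names are assumptions layered on the question's classes, not a confirmed fix): give Software a foreign key to Location, create each Location exactly once, and reference the same instance from every Software row so Entity Framework inserts only two Location rows.

        // Sketch: Software points at a Location through a foreign key.
        public class Software
        {
            public int Id { get; set; }
            public string Title { get; set; }
            public int LocationId { get; set; }            // the LocationID column
            public virtual Location Location { get; set; }
        }

        // Seeding: build the two locations once, then share the instances.
        var paradise = new Location { LocationName = "Paradise" };
        var downtown = new Location { LocationName = "Downtown" }; // hypothetical second site

        context.Software.Add(new Software { Title = "Microsoft Office", Location = paradise });
        context.Software.Add(new Software { Title = "Visual Studio", Location = downtown }); // hypothetical row
        context.SaveChanges();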


  • Immutability and shared references - how to reconcile?

    - by davetron5000
    Consider this simplified application domain, a criminal investigative database:

    - A Person is anyone involved in an investigation.
    - A Report is a bit of info that is part of an investigation.
    - A Report references a primary Person (the subject of an investigation).
    - A Report has accomplices who are secondarily related (and could certainly be primary in other investigations or reports).
    - These classes have ids that are used to store them in a database, since their info can change over time (e.g. we might find new aliases for a person, or add persons of interest to a report).

    If these are stored in some sort of database and I wish to use immutable objects, there seems to be an issue regarding state and referencing. Suppose I change some metadata about a Person. Since my Person objects are immutable, I might have some code like:

        class Person(
          val id:UUID,
          val aliases:List[String],
          val reports:List[Report]) {

          def addAlias(name:String) = new Person(id, name :: aliases, reports)
        }

    So my Person with a new alias becomes a new object, also immutable. If a Report refers to that person, but the alias was changed elsewhere in the system, my Report now refers to the "old" person, i.e. the person without the new alias. Similarly, I might have:

        class Report(val id:UUID, val content:String) {
          /** Adding more info to our report */
          def updateContent(newContent:String) = new Report(id, newContent)
        }

    Since these objects don't know who refers to them, it's not clear to me how to let all the "referrers" know that there is a new object available representing the most recent state. This could be done by having all objects "refresh" from a central data store, and having all operations that create new, updated objects store them to the central data store, but this feels like a cheesy reimplementation of the underlying language's referencing. It would be clearer to just make these "secondary storable objects" mutable, so that if I add an alias to a Person, all referrers see the new value without doing anything. How is this dealt with when we want to avoid mutability, or is this a case where immutability is not helpful?
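    One common reconciliation (sketched here in C# for concreteness, with all names invented for illustration) is to keep the snapshots immutable but make references go through the identity, so the only mutable cell in the system is the id-to-current-snapshot map:

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Immutable;

        // Immutable snapshots; a Report stores the Person's id, not the object.
        public sealed record Person(Guid Id, ImmutableList<string> Aliases)
        {
            public Person AddAlias(string name) => this with { Aliases = Aliases.Add(name) };
        }

        public sealed record Report(Guid Id, string Content, Guid PrimaryPersonId);

        // The single mutable cell: which snapshot is current for each identity.
        public sealed class PersonStore
        {
            private readonly ConcurrentDictionary<Guid, Person> _current = new();

            public Person Get(Guid id) => _current[id];
            public void Put(Person p) => _current[p.Id] = p;
        }

    Referrers then resolve the id at the moment they need the data, so they always see the latest snapshot without anyone broadcasting updates; this is essentially the functional-programming answer of confining mutation to a managed reference (an agent, an atom, or a database row) rather than spreading it through the domain objects.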


  • How to deal with a Java serialized object whose package changed?

    - by Alex
    I have a Java class that is stored in an HttpSession object that's serialized and transferred between servers in a cluster environment. For the purpose of this explanation, let's call this class "Person". While improving the code, this class was moved from "com.acme.Person" to "com.acme.entity.Person". Internally, the class remains exactly the same (same fields, same methods, same everything). The problem is that we have two sets of servers running the old code and the new code at the same time. The servers with the old code have serialized HttpSession objects, and when the new code deserializes one, it throws a ClassNotFoundException because it can't find the old reference to com.acme.Person. At this point it's easy to deal with, because we can just recreate the object using the new package. The problem then becomes that the HttpSession on the new servers will serialize the object with the new reference to com.acme.entity.Person, and when this is deserialized on the servers running the old code, another exception will be thrown. At that point we can't deal with the exception anymore. What's the best strategy for this kind of case? Is there a way to tell the new servers to serialize the object with the reference to the old package, and to deserialize references to the old package as the new one? How would we transition to using the new package and forgetting about the old one once all servers run the new code?


  • Make a DataList into a 3 x 3 grid

    - by unknown
    This is my code-behind for the products page. The problem is that my DataList just keeps filling in new items whenever I add a new object on the website, so may I know how to change the code so that the DataList lays out as 3 x 3? Thanks. (For the grid layout itself, see the sketch after the code.)

    Code-behind:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            Dim Product As New Product
            Dim DataSet As New DataSet
            Dim pds As New PagedDataSource
            DataSet = Product.GetProduct
            Session("Dataset") = DataSet
            pds.PageSize = 4
            pds.AllowPaging = True
            pds.CurrentPageIndex = 1
            Session("Page") = pds
            If Not Page.IsPostBack Then
                UpdateDatabind()
            End If
        End Sub

        Sub UpdateDatabind()
            Dim DataSet As DataSet = Session("DataSet")
            If DataSet.Tables(0).Rows.Count > 0 Then
                pds.DataSource = DataSet.Tables(0).DefaultView
                Session("Page") = pds
                dlProducts.DataSource = DataSet.Tables(0).DefaultView
                dlProducts.DataBind()
                lblCount.Text = DataSet.Tables(0).Rows.Count
            End If
        End Sub

        Private Sub dlProducts_UpdateCommand(source As Object, e As System.Web.UI.WebControls.DataListCommandEventArgs) Handles dlProducts.UpdateCommand
            dlProducts.DataBind()
        End Sub

        Public Sub PrevNext_Command(source As Object, e As CommandEventArgs)
            Dim pds As PagedDataSource = Session("Page")
            Dim CurrentPage = pds.CurrentPageIndex
            If e.CommandName = "Previous" Then
                If CurrentPage < 1 Or CurrentPage = pds.IsFirstPage Then
                    CurrentPage = 1
                Else
                    CurrentPage -= 1
                End If
                UpdateDatabind()
            ElseIf e.CommandName = "Next" Then
                If CurrentPage > pds.PageCount Or CurrentPage = pds.IsLastPage Then
                    CurrentPage = CurrentPage
                Else
                    CurrentPage += 1
                End If
                UpdateDatabind()
            End If
            Response.Redirect("Products.aspx?PageIndex=" & CurrentPage)
        End Sub

    This is my code for Product.vb:

        Public Function GetProduct() As DataSet
            Dim strConn As String
            strConn = ConfigurationManager.ConnectionStrings("HomeFurnitureConnectionString").ToString
            Dim conn As New SqlConnection(strConn)
            Dim strSql As String
            'strSql = "SELECT * FROM Product p INNER JOIN ProductDetail pd ON p.ProdID = pd.ProdID " & _
            '         "WHERE pd.PdColor = (SELECT min(PdColor) FROM ProductDetail as pd1 WHERE pd1.ProdID = p.ProdID)"
            Dim cmd As New SqlCommand(strSql, conn)
            Dim ds As New DataSet
            Dim da As New SqlDataAdapter(cmd)
            conn.Open()
            da.Fill(ds)
            conn.Close()
            Return ds
        End Function
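    The DataList control itself can produce the 3-wide grid: RepeatColumns and RepeatDirection control the layout, and a page size of 9 gives three rows per page. A minimal sketch (shown in C# for consistency with the other examples here; the VB form is the same apart from syntax):

        // Sketch: render the DataList as a 3 x 3 grid per page.
        dlProducts.RepeatColumns = 3;                             // three cells per row
        dlProducts.RepeatDirection = RepeatDirection.Horizontal;  // fill left to right
        pds.PageSize = 9;                                         // 3 rows x 3 columns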


  • Query Parameter Value Is Null When Enum Item 0 is Cast with Int32

    - by Timothy
    When I use the first item in a zero-based enum, cast to Int32, as a query parameter, the parameter value is null. I've worked around it by simply setting the first item to a value of 1, but I was wondering what's really going on here. This one has me scratching my head. Why is the parameter value regarded as null instead of 0?

        enum LogEventType : int
        {
            SignIn,
            SignInFailure,
            SignOut,
            ...
        }

        private static DataTable QueryEventLogSession(DateTime start, DateTime stop)
        {
            DataTable entries = new DataTable();
            using (FbConnection conn = new FbConnection(DSN))
            {
                using (FbDataAdapter adapter = new FbDataAdapter(
                    "SELECT event_type, event_timestamp, event_details FROM event_log " +
                    "WHERE event_timestamp BETWEEN @start AND @stop " +
                    "AND event_type IN (@signIn, @signInFailure, @signOut) " +
                    "ORDER BY event_timestamp ASC", conn))
                {
                    adapter.SelectCommand.Parameters.AddRange(new Object[] {
                        new FbParameter("@start", start),
                        new FbParameter("@stop", stop),
                        new FbParameter("@signIn", (Int32)LogEventType.SignIn),
                        new FbParameter("@signInFailure", (Int32)LogEventType.SignInFailure),
                        new FbParameter("@signOut", (Int32)LogEventType.SignOut)});

                    Trace.WriteLine(adapter.SelectCommand.CommandText);
                    foreach (FbParameter p in adapter.SelectCommand.Parameters)
                    {
                        Trace.WriteLine(p.Value.ToString());
                    }

                    adapter.Fill(entries);
                }
            }
            return entries;
        }
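    A likely explanation, offered as an assumption about FbParameter's constructor overloads (they mirror SqlParameter's): the C# compiler lets a constant expression with value zero convert implicitly to any enum type, so (Int32)LogEventType.SignIn, a constant 0, can bind to the FbParameter(string, FbDbType) constructor instead of FbParameter(string, object). The 0 is then consumed as a type code and Value is never set. Boxing the argument forces the object overload:

        // Sketch: the cast to object makes overload resolution pick
        // FbParameter(string, object), so 0 becomes the parameter's value.
        new FbParameter("@signIn", (object)(Int32)LogEventType.SignIn)

    The non-zero members don't hit this because only a constant zero converts implicitly to an enum type, which would also explain why renumbering the first member to 1 made the problem disappear.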


  • Help me convert .NET 1.1 Xml validation code to .NET 2.0 please.

    - by Hamish Grubijan
    It would be fantastic if you could help me get rid of the warnings below. I have not been able to find a good document. Since the warnings are concentrated in the private void ValidateConfiguration(XmlNode section) method, hopefully this is not terribly hard to answer if you have encountered it before. Thanks!

        'System.Configuration.ConfigurationException.ConfigurationException(string)' is obsolete: 'This class is obsolete, to create a new exception create a System.Configuration!System.Configuration.ConfigurationErrorsException'

        'System.Xml.XmlValidatingReader' is obsolete: 'Use XmlReader created by XmlReader.Create() method using appropriate XmlReaderSettings instead. http://go.microsoft.com/fwlink/?linkid=14202'

        private void ValidateConfiguration(XmlNode section)
        {
            // throw if there is no configuration node.
            if (null == section)
            {
                throw new ConfigurationException("The configuration section passed within the ... class was null ... there must be a configuration file defined.", section);
            }

            // Validate the document using a schema
            XmlValidatingReader vreader = new XmlValidatingReader(
                new XmlTextReader(new StringReader(section.OuterXml)));

            // open stream on Resources; the XSD is set as an "embedded resource"
            // so Resource can open a stream on it
            using (Stream xsdFile = XYZ.GetStream("ABC.xsd"))
            using (StreamReader sr = new StreamReader(xsdFile))
            {
                vreader.ValidationEventHandler += new ValidationEventHandler(ValidationCallBack);
                vreader.Schemas.Add(XmlSchema.Read(new XmlTextReader(sr), null));
                vreader.ValidationType = ValidationType.Schema;

                // Validate the document
                while (vreader.Read()) { }

                if (!_isValidDocument)
                {
                    _schemaErrors = _sb.ToString();
                    throw new ConfigurationException("XML Document not valid");
                }
            }
        }

        // Does not cause warnings.
        private void ValidationCallBack(object sender, ValidationEventArgs args)
        {
            // check what KIND of problem the schema validation reader has;
            // on FX 1.0, it gives a warning for "<xs:any...skip" sections. Don't worry
            // about those, only set validation false for real errors
            if (args.Severity == XmlSeverityType.Error)
            {
                _isValidDocument = false;
                _sb.Append(args.Message + Environment.NewLine);
            }
        }
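    A sketch of the .NET 2.0 shape of that method: XmlReaderSettings plus XmlReader.Create replaces XmlValidatingReader, and ConfigurationErrorsException replaces the obsolete constructor (XYZ.GetStream, ValidationCallBack, and the fields are carried over from the question):

        private void ValidateConfiguration(XmlNode section)
        {
            if (section == null)
                throw new ConfigurationErrorsException("The configuration section was null; there must be a configuration file defined.");

            using (Stream xsdFile = XYZ.GetStream("ABC.xsd"))
            using (StreamReader sr = new StreamReader(xsdFile))
            {
                XmlReaderSettings settings = new XmlReaderSettings();
                settings.ValidationType = ValidationType.Schema;
                settings.Schemas.Add(XmlSchema.Read(sr, null));
                settings.ValidationEventHandler += ValidationCallBack;

                // Validate the document by reading it to the end.
                using (XmlReader vreader = XmlReader.Create(
                           new StringReader(section.OuterXml), settings))
                {
                    while (vreader.Read()) { }
                }

                if (!_isValidDocument)
                {
                    _schemaErrors = _sb.ToString();
                    throw new ConfigurationErrorsException("XML Document not valid");
                }
            }
        }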


  • VB.NET Update Access Database with DataTable

    - by sinDizzy
    I've been perusing some help forums and some help books but can't seem to get my head wrapped around this. My task is to read data from two text files and then load that data into an existing MS Access 2007 database. So here is what I'm trying to do:

    - Read data from the first text file and, for every line of data, add a row to a DataTable using CarID as my unique field.
    - Read data from the second text file and look for an existing CarID in the DataTable; if it exists, update that row, and if it doesn't, add a new row.
    - Once I'm done, push the contents of the DataTable to the database.

    What I have so far:

        Dim sSQL As String = "SELECT * FROM tblCars"
        Dim da As New OleDb.OleDbDataAdapter(sSQL, conn)
        Dim ds As New DataSet
        da.Fill(ds, "CarData")
        Dim cb As New OleDb.OleDbCommandBuilder(da)

        'loop: read a line of text and parse it out; gets dd, dc, and carID

        'create a new empty row
        Dim dsNewRow As DataRow = ds.Tables("CarData").NewRow()

        'update the new row with fresh data
        dsNewRow.Item("DriveDate") = dd
        dsNewRow.Item("DCode") = dc
        dsNewRow.Item("CarNum") = carID
        'about 15 more fields

        'add the filled row to the DataSet table
        ds.Tables("CarData").Rows.Add(dsNewRow)

        'end loop

        'update the database with the new rows
        da.Update(ds, "CarData")

    Questions:

    1. In constructing my table I use "SELECT * FROM tblCars", but what if that table already has millions of records? Isn't that a waste of resources? Should I be trying something different if I only want to add new records?
    2. Once I'm done with the first text file, I move on to the second. What's the best approach here: first look for an existing record based on CarNum, or build a second table and merge the two at the end?
    3. Finally, when the DataTable is populated and I'm pushing it to the database, I want records that already exist for the three key fields (DriveDate, DCode, and CarNum) to be updated and all others to be appended. Is that possible with my process? (A find-or-add sketch follows this entry.)

    tia AGP
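    One way to approach the find-or-add step, sketched in C# for consistency with the other examples here (column names are taken from the question, everything else is assumed): give the in-memory DataTable a primary key, then let Rows.Find decide between update and insert.

        // Sketch: merge one parsed record into the CarData table.
        DataTable cars = ds.Tables["CarData"];
        cars.PrimaryKey = new[] { cars.Columns["CarNum"] };   // or a composite key of
                                                              // DriveDate + DCode + CarNum
        DataRow row = cars.Rows.Find(carId);
        if (row == null)
        {
            row = cars.NewRow();
            row["CarNum"] = carId;
            cars.Rows.Add(row);                               // becomes an INSERT on Update
        }
        row["DriveDate"] = dd;                                // existing rows become UPDATEs
        row["DCode"] = dc;

    As for pulling millions of rows just to get a schema, a common trick is to fill the adapter from a query that returns no rows, such as SELECT * FROM tblCars WHERE 1 = 0, so the command builder still sees the table structure without the data; whether that fits depends on how many incoming records have to be matched against rows already in the database.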


  • LINQ: display results from empty lists

    - by Douglas H. M.
    I've created two entities (simplified) in C#:

        class Log
        {
            public Log() { Entries = new List<Entry>(); }
            public DateTime Date { get; set; }
            public IList<Entry> Entries { get; set; }
        }

        class Entry
        {
            public DateTime ClockIn { get; set; }
            public DateTime ClockOut { get; set; }
        }

    I am using the following code to initialize the objects:

        Log log1 = new Log()
        {
            Date = new DateTime(2010, 1, 1),
        };
        log1.Entries.Add(new Entry()
        {
            ClockIn = new DateTime(0001, 1, 1, 9, 0, 0),
            ClockOut = new DateTime(0001, 1, 1, 12, 0, 0)
        });

        Log log2 = new Log()
        {
            Date = new DateTime(2010, 2, 1),
        };

    The method below is used to get the date logs:

        var query = from l in DB.GetLogs()
                    from e in l.Entries
                    orderby l.Date ascending
                    select new
                    {
                        Date = l.Date,
                        ClockIn = e.ClockIn,
                        ClockOut = e.ClockOut,
                    };

    The result of the above LINQ query is:

        /*
        Date       | Clock In | Clock Out
        01/01/2010 | 09:00    | 12:00
        */

    My question is: what is the best way to rewrite the LINQ query above to include the results from the second object I created (log2), since it has an empty list? In other words, I would like to display all dates even if they don't have time values. The expected result would be:

        /*
        Date       | Clock In | Clock Out
        01/01/2010 | 09:00    | 12:00
        02/01/2010 |          |
        */
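    A sketch of one way to do it: DefaultIfEmpty turns an empty inner sequence into a single null element, which simulates a left join (the nullable casts are needed so the empty case has something to hold):

        var query = from l in DB.GetLogs()
                    from e in l.Entries.DefaultIfEmpty()
                    orderby l.Date ascending
                    select new
                    {
                        Date = l.Date,
                        ClockIn = e != null ? (DateTime?)e.ClockIn : null,
                        ClockOut = e != null ? (DateTime?)e.ClockOut : null,
                    };

    Rendering the null values as blanks is then a display concern for whatever formats the table.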



  • Method not found: 'Void Google.Apis.Util.Store.FileDataStore..ctor(System.String)'

    - by user3732193
    I've been stuck on this for days now. I copied the exact code from the Google API samples to upload files to Google Drive. Here is the code:

        UserCredential credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
            new ClientSecrets
            {
                ClientId = ClientId,
                ClientSecret = ClientSecret,
            },
            new[] { DriveService.Scope.Drive, DriveService.Scope.DriveFile },
            "user",
            CancellationToken.None,
            new FileDataStore("MyStore")).Result;

    But it throws an exception at runtime: Method not found: 'Void Google.Apis.Util.Store.FileDataStore..ctor(System.String)'. I already added the necessary Google API dlls. Alternatively, could anyone suggest better code for uploading files to Google Drive from a website that implements server-side authorization? Any help would be greatly appreciated.

    UPDATE: I changed my code to this:

        var token = new TokenResponse
        {
            RefreshToken = "1/6hnki1x0xOMU4tr5YXNsLgutzbTcRK1M-QOTEuRVxL4"
        };
        var credentials = new UserCredential(
            new GoogleAuthorizationCodeFlow(new GoogleAuthorizationCodeFlow.Initializer
            {
                ClientSecrets = new ClientSecrets
                {
                    ClientId = ClientId,
                    ClientSecret = ClientSecret
                },
                Scopes = new[] { DriveService.Scope.Drive, DriveService.Scope.DriveFile }
            }),
            "user",
            token);

    But it also throws an exception: Method not found: 'Void Google.Apis.Auth.OAuth2.Flows.GoogleAuthorizationCodeFlow..ctor(Initializer)'. Is the problem with the dlls?


  • Choropleth mapping issue in R

    - by chasec
    I am trying to follow the tutorial described here: http://www.thisisthegreenroom.com/2009/choropleths-in-r/

    The code below executes, but it is either not matching my dataset with the maps_counties data properly, or it isn't plotting in the order I would expect. For example, the resulting areas for the greater NYC area show no density, while random counties in PA show the highest density. The general format of my data table is:

        county      state        count
        fairfield   connecticut  17
        hartford    connecticut  6
        litchfield  connecticut  3
        new haven   connecticut  12
        ...         ...          ...
        westchester new york     70
        yates       new york     1
        luzerne     pennsylvania 1

    Note this data is in order by state and then county, and includes data for CT, NJ, NY, and PA. First, I read in my dataset:

        library(maps)
        library(RColorBrewer)

        d <- read.table("gissum.txt", sep="\t", header=TRUE)

        # Concatenate state and county info to match the maps library
        d$stcon <- paste(d$state, d$county, sep=",")

        # Color bins
        colors = brewer.pal(5, "PuBu")
        d$colorBuckets <- as.factor(as.numeric(cut(d$count, c(0,10,20,30,40,50,300))))

    Here is my matching:

        mapnames <- map("county", plot=FALSE)[4]$names
        colorsmatched <- d$colorBuckets[na.omit(match(mapnames, d$stcon))]

    Plotting:

        map("county"
            ,c("new york","new jersey", "connecticut", "pennsylvania")
            ,col = colors[d$colorBuckets[na.omit(match(mapnames, d$stcon))]]
            ,fill = TRUE
            ,resolution = 0
            ,lty = 0
            ,lwd = 0.5
        )
        map("state"
            ,c("new york","new jersey", "connecticut", "pennsylvania")
            ,col = "black"
            ,fill = FALSE
            ,add = TRUE
            ,lty = 1
            ,lwd = 2
        )
        map("county"
            ,c("new york","new jersey", "connecticut", "pennsylvania")
            ,col = "black"
            ,fill = FALSE
            ,add = TRUE
            ,lty = 1
            ,lwd = .5
        )
        title(main="Respondent Home ZIP Codes by County")

    I am sure I am missing something basic about the order in which the maps function plots items, but I can't seem to figure it out. Thanks for the help. Please let me know if you need any more information.


  • Hadoop File Read

    - by user3684584
    This is the Distributed Cache wordcount example on Hadoop 2.2.0. I copied a file into the HDFS filesystem to be used inside the setup of the mapper class:

        protected void setup(Context context) throws IOException, InterruptedException {
            Path[] uris = DistributedCache.getLocalCacheFiles(context.getConfiguration());
            cacheData = new HashMap();
            for (Path urifile : uris) {
                try {
                    BufferedReader readBuffer1 = new BufferedReader(new FileReader(urifile.toString()));
                    String line;
                    while ((line = readBuffer1.readLine()) != null) {
                        System.out.println("**************" + line);
                        cacheData.put(line, line);
                    }
                    readBuffer1.close();
                } catch (Exception e) {
                    System.out.println(e.toString());
                }
            }
        }

    Inside the driver main class:

        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 3) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word_count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        Path outputpath = new Path(otherArgs[1]);
        outputpath.getFileSystem(conf).delete(outputpath, true);
        FileOutputFormat.setOutputPath(job, outputpath);
        System.out.println("CachePath****************" + otherArgs[2]);
        DistributedCache.addCacheFile(new URI(otherArgs[2]), job.getConfiguration());
        System.exit(job.waitForCompletion(true) ? 0 : 1);

    But I am getting this exception:

        java.io.FileNotFoundException: file:/home/user12/tmp/mapred/local/1408960542382/cache (No such file or directory)

    So the cache functionality is not working properly. Any ideas?



  • Why does this Javascript work in FF3.6? new Date("2010-06-09T19:20:30+01:00");

    - by thegaw
    Here's the sample code:

        var d = new Date("2010-06-09T19:20:30+01:00");
        document.write(d);

    On FF3.6 this will give you:

        Wed Jun 09 2010 14:20:30 GMT-0400 (EST)

    Other browsers tested (Chrome 5, Safari 4, IE7) give:

        Invalid Date

    I know there is limited to no support for ISO 8601 dates, but does anyone know what and/or where the difference is in FF3.6 that allows this to work? My thought is that FF is just stripping out what it doesn't understand while the others are not. Has anyone else seen this and/or gotten different results from the test script?


  • How to do a Translate Animation for both axes (X, Y) at the same time?

    - by user1235555
    I am doing something like this in my Storyboard method but am not able to achieve the desired result. I want this animation to play after the page is loaded.

        private void PhoneApplicationPage_Loaded(object sender, RoutedEventArgs e)
        {
            CreateTranslateAnimation(image1);
        }

        private void CreateTranslateAnimation(UIElement source)
        {
            Storyboard sb = new Storyboard();

            DoubleAnimationUsingKeyFrames animationFirstX = new DoubleAnimationUsingKeyFrames();
            source.RenderTransform = new CompositeTransform();
            Storyboard.SetTargetProperty(animationFirstX, new PropertyPath(CompositeTransform.TranslateXProperty));
            Storyboard.SetTarget(animationFirstX, source.RenderTransform);
            animationFirstX.KeyFrames.Add(new EasingDoubleKeyFrame() { KeyTime = kt1, Value = 20 });

            DoubleAnimationUsingKeyFrames animationFirstY = new DoubleAnimationUsingKeyFrames();
            source.RenderTransform = new CompositeTransform();
            Storyboard.SetTargetProperty(animationFirstY, new PropertyPath(CompositeTransform.TranslateYProperty));
            Storyboard.SetTarget(animationFirstY, source.RenderTransform);
            animationFirstY.KeyFrames.Add(new EasingDoubleKeyFrame() { KeyTime = kt1, Value = 30 });

            sb.Children.Add(animationFirstX);
            sb.Children.Add(animationFirstY);
            sb.Begin();
        }

    To cut it short, I want to write C# code equivalent to this XAML:

        <Storyboard x:Name="Storyboard1">
            <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(CompositeTransform.TranslateX)" Storyboard.TargetName="image1">
                <EasingDoubleKeyFrame KeyTime="0:0:0.5" Value="20"/>
            </DoubleAnimationUsingKeyFrames>
            <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(CompositeTransform.TranslateY)" Storyboard.TargetName="image1">
                <EasingDoubleKeyFrame KeyTime="0:0:0.5" Value="30"/>
            </DoubleAnimationUsingKeyFrames>
        </Storyboard>
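    One detail worth noting: the method above assigns source.RenderTransform a new CompositeTransform twice, so the X animation ends up targeting a transform that has already been replaced. A sketch with a single transform shared by both animations (kt1 replaced by an explicit 0.5 s key time to match the XAML):

        private void CreateTranslateAnimation(UIElement source)
        {
            var transform = new CompositeTransform();
            source.RenderTransform = transform;   // assign exactly once

            var sb = new Storyboard();
            var kt = KeyTime.FromTimeSpan(TimeSpan.FromSeconds(0.5));

            var animX = new DoubleAnimationUsingKeyFrames();
            Storyboard.SetTarget(animX, transform);
            Storyboard.SetTargetProperty(animX, new PropertyPath(CompositeTransform.TranslateXProperty));
            animX.KeyFrames.Add(new EasingDoubleKeyFrame { KeyTime = kt, Value = 20 });

            var animY = new DoubleAnimationUsingKeyFrames();
            Storyboard.SetTarget(animY, transform);
            Storyboard.SetTargetProperty(animY, new PropertyPath(CompositeTransform.TranslateYProperty));
            animY.KeyFrames.Add(new EasingDoubleKeyFrame { KeyTime = kt, Value = 30 });

            sb.Children.Add(animX);
            sb.Children.Add(animY);
            sb.Begin();
        }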


  • Change object on client side or on server side

    - by Polina Feterman
    I'm not sure what the best practice is. I have some big and complex objects (not flat). In each object I have many related objects. For example, Invoice is the main class, and one of its properties is invoiceSupervisor, itself a big class called User. User is also not flat and has a department property, an object called Department.

    Say I want to create a new Invoice. The first way: I can present several fields to the client to fill in. Some of them will be combo boxes that I will need to fill with available values, for example available invoice supervisors. Then I send all the chosen values to the server, and on the server I create a new Invoice and assign all the chosen values to it. To assign the supervisor, I pull from the server the User the client picked in the combo box, by id. I might do some verification on the User, such as whether the user is applicable to be an invoice supervisor, and then assign the User object to invoiceSupervisor. After filling all properties, I save the new invoice.

    The second way: at the beginning I can call the server to get a new Invoice. Then on the client I fill in all the chosen values; for example, I can call the server to get a new User object, fill in its id from the combo box, and assign the User as invoiceSupervisor. After filling the Invoice object on the client, I send it to the server, and the server saves the new invoice, possibly running some validations before saving.

    So what is the best approach: to make the object on the client and send it to the server, or to collect all values from the client and make a new object on the server using those values?


  • How to use a custom font with SimpleCursorAdapter for a list view in a searchable dictionary app? Please help me

    - by user1877275
    I am new to Android; this is my first project. I am trying to make a name dictionary in Bangla, so I need to change the font. I have already added the font to the assets folder.

        private void showResults(String query) {
            Cursor cursor = managedQuery(DictionaryProvider.CONTENT_URI, null, null,
                    new String[] {query}, null);

            if (cursor == null) {
                // There are no results
                mTextView.setText(getString(R.string.no_results, new Object[] {query}));
            } else {
                // Display the number of results
                int count = cursor.getCount();
                String countString = getResources().getQuantityString(R.plurals.search_results,
                        count, new Object[] {count, query});
                mTextView.setText(countString);

                // Specify the columns we want to display in the result
                String[] from = new String[] { DictionaryDatabase.KEY_WORD,
                                               DictionaryDatabase.KEY_DEFINITION };

                // Specify the corresponding layout elements where we want the columns to go
                int[] to = new int[] { R.id.word, R.id.definition };

                // Create a simple cursor adapter for the definitions and apply them to the ListView
                SimpleCursorAdapter words = new SimpleCursorAdapter(this, R.layout.result, cursor, from, to);
                mListView.getAdapter();

                // Define the on-click listener for the list items
                mListView.setOnItemClickListener(new OnItemClickListener() {
                    public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                        // Build the Intent used to open WordActivity with a specific word Uri
                        Intent wordIntent = new Intent(getApplicationContext(), WordActivity.class);
                        Uri data = Uri.withAppendedPath(DictionaryProvider.CONTENT_URI, String.valueOf(id));
                        wordIntent.setData(data);
                        startActivity(wordIntent);
                    }
                });
            }
        }


  • Use combobox value as query value (VC# 2k8 / ADO SQLITE)

    - by nicodevil
    Hi, I'm a newbie with VC# and ADO/SQLite. I'm trying to change the content of one combo box according to the value selected in another one. My problem is here:

        SQLiteCommand cmd3 = new SQLiteCommand("select distinct(ACTION) from ACTION_LIST where CATEGORY='comboBox1.text'", conn2);

    What should I use to do the job? Here 'comboBox1.text' is seen as literal text, not as a variable. Here is the code:

        private void Form1_Load(object sender, EventArgs e)
        {
            using (SQLiteConnection conn1 = new SQLiteConnection(@"Data Source = Data\MRIS_DB_MASTER"))
            {
                conn1.Open();
                SQLiteCommand cmd2 = new SQLiteCommand("select distinct(CATEGORY) from ACTION_LIST", conn1);
                SQLiteDataAdapter adapter1 = new SQLiteDataAdapter(cmd2);
                DataTable tbl1 = new DataTable();
                adapter1.Fill(tbl1);
                comboBox1.DataSource = tbl1;
                comboBox1.DisplayMember = "CATEGORY";
                adapter1.Dispose();
                cmd2.Dispose();
            }

            using (SQLiteConnection conn2 = new SQLiteConnection(@"Data Source = Data\MRIS_DB_MASTER"))
            {
                conn2.Open();
                SQLiteCommand cmd3 = new SQLiteCommand("select distinct(ACTION) from ACTION_LIST where CATEGORY='comboBox1.text'", conn2);
                SQLiteDataAdapter adapter2 = new SQLiteDataAdapter(cmd3);
                DataTable tbl2 = new DataTable();
                adapter2.Fill(tbl2);
                comboBox2.DataSource = tbl2;
                comboBox2.DisplayMember = "ACTION";
                adapter2.Dispose();
                cmd3.Dispose();
            }
        }

    Thanks, regards.
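    A sketch of the parameterized version (System.Data.SQLite syntax; the method split and event wiring are assumptions): pass the combo box value as a parameter, and re-run the query whenever the first box changes so the second box follows it.

        private void FillActions(string category)
        {
            using (var conn = new SQLiteConnection(@"Data Source = Data\MRIS_DB_MASTER"))
            using (var cmd = new SQLiteCommand(
                "select distinct(ACTION) from ACTION_LIST where CATEGORY = @category", conn))
            {
                cmd.Parameters.AddWithValue("@category", category); // bound as a value, not literal text
                conn.Open();

                var tbl = new DataTable();
                using (var adapter = new SQLiteDataAdapter(cmd))
                    adapter.Fill(tbl);

                comboBox2.DataSource = tbl;
                comboBox2.DisplayMember = "ACTION";
            }
        }

        private void comboBox1_SelectedIndexChanged(object sender, EventArgs e)
        {
            FillActions(comboBox1.Text);
        }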


  • Programming Practice

    - by deepti
        public DataTable UserUpdateTempSettings(int install_id, int install_map_id,
                                                string Setting_value, string LogFile)
        {
            SqlConnection oConnection = new SqlConnection(sConnectionString);
            DataSet oDataset = new DataSet();
            DataTable oDatatable = new DataTable();
            SqlDataAdapter MyDataAdapter = new SqlDataAdapter();
            try
            {
                oConnection.Open();
                cmd = new SqlCommand("SP_HOTDOC_PRINTTEMPLATE_PERMISSION", oConnection);
                cmd.Parameters.Add(new SqlParameter("@INSTALL_ID", install_id));
                cmd.Parameters.Add(new SqlParameter("@INSTALL_MAP_ID", install_map_id));
                cmd.Parameters.Add(new SqlParameter("@SETTING_VALUE", Setting_value));
                if (LogFile != "")
                {
                    cmd.Parameters.Add(new SqlParameter("@LOGFILE", LogFile));
                }
                cmd.CommandType = CommandType.StoredProcedure;
                MyDataAdapter.SelectCommand = cmd;
                cmd.ExecuteNonQuery();
                MyDataAdapter.Fill(oDataset);
                oDatatable = oDataset.Tables[0];
                return oDatatable;
            }
            catch (Exception ex)
            {
                Utils.ShowError(ex.Message);
                return oDatatable;
            }
            finally
            {
                if ((oConnection.State != ConnectionState.Closed) || (oConnection.State != ConnectionState.Broken))
                {
                    oConnection.Close();
                }
                oDataset = null;
                oDatatable = null;
                oConnection.Dispose();
                oConnection = null;
            }
        }

    I have used ExecuteNonQuery here, though normally it's not used with a data adapter. If I don't use it, I get an error. Is it bad programming practice to use ExecuteNonQuery with a data adapter?
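    For what it's worth, a data adapter's Fill executes the select command itself (opening and closing the connection if needed), so the ExecuteNonQuery call above makes the stored procedure run twice. A sketch of the same method without it; if removing the call produces an error, that error is the thing to debug rather than something to mask with a second execution:

        public DataTable UserUpdateTempSettings(int install_id, int install_map_id,
                                                string Setting_value, string LogFile)
        {
            using (var conn = new SqlConnection(sConnectionString))
            using (var cmd = new SqlCommand("SP_HOTDOC_PRINTTEMPLATE_PERMISSION", conn))
            using (var adapter = new SqlDataAdapter(cmd))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@INSTALL_ID", install_id);
                cmd.Parameters.AddWithValue("@INSTALL_MAP_ID", install_map_id);
                cmd.Parameters.AddWithValue("@SETTING_VALUE", Setting_value);
                if (LogFile != "")
                    cmd.Parameters.AddWithValue("@LOGFILE", LogFile);

                var ds = new DataSet();
                adapter.Fill(ds);          // Fill opens and closes the connection itself
                return ds.Tables[0];
            }
        }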


  • Parallelism in .NET – Part 13, Introducing the Task class

    - by Reed
    Once we've used a task-based decomposition to decompose a problem, we need a clean abstraction usable to implement the resulting decomposition. Given that task decomposition is founded upon defining discrete tasks, .NET 4 has introduced a new API for dealing with task-related issues, the aptly named Task class.

    The Task class is a wrapper for a delegate representing a single, discrete task within your decomposition. We will go into various methods of construction for tasks later, but, when reduced to its fundamentals, an instance of a Task is nothing more than a wrapper around a delegate with some utility functionality added. In order to fully understand the Task class within the new Task Parallel Library, it is important to realize that a task really is just a delegate, nothing more. In particular, note that I never mentioned threading or parallelism in my description of a Task. Although the Task class exists in the new System.Threading.Tasks namespace, tasks are not directly related to threads or multithreading.

    Of course, Task instances will typically be used in our implementation of concurrency within an application, but the Task class itself does not provide the concurrency used. The Task API supports using Tasks in an entirely single-threaded, synchronous manner. Tasks are very much like standard delegates. You can execute a task synchronously via Task.RunSynchronously(), or you can use Task.Start() to schedule a task to run, typically asynchronously. This is very similar to using delegate.Invoke to execute a delegate synchronously, or using delegate.BeginInvoke to execute it asynchronously.

    The Task class adds some nice functionality on top of a standard delegate which improves usability in both synchronous and multithreaded environments. The first addition provided by Task is a means of handling cancellation via the new unified cancellation mechanism of .NET 4. If the wrapped delegate within a Task raises an OperationCanceledException during its operation, which is typically generated via calling ThrowIfCancellationRequested on a CancellationToken, or if the CancellationToken used to construct a Task instance is flagged as canceled, the Task's IsCanceled property will be set to true automatically. This provides a clean way to determine whether a Task has been canceled, often without requiring specific exception handling.

    Tasks also provide a clean API which can be used for waiting on a task. Although the Task class explicitly implements IAsyncResult, Tasks provide a nicer usage model than the traditional .NET Asynchronous Programming Model. Instead of needing to track an IAsyncResult handle, you can just directly call Task.Wait() to block until a Task has completed. Overloads exist for providing a timeout, a CancellationToken, or both to prevent waiting indefinitely. In addition, the Task class provides static methods for waiting on multiple tasks, Task.WaitAll and Task.WaitAny, again with overloads providing timeout options. This provides a very simple, clean API for waiting on single or multiple tasks.

    Finally, Tasks provide a much nicer model for exception handling. If the delegate wrapped within a Task raises an exception, the exception will automatically get wrapped into an AggregateException and exposed via the Task.Exception property. This exception is stored with the Task directly, and does not tear down the application. Later, when Task.Wait() (or Task.WaitAll or Task.WaitAny) is called on this task, an AggregateException will be raised at that point if any of the tasks raised an exception. For example, suppose we have the following code:

        Task taskOne = new Task(() =>
        {
            throw new ApplicationException("Random Exception!");
        });
        Task taskTwo = new Task(() =>
        {
            throw new ArgumentException("Different exception here");
        });

        // Start the tasks
        taskOne.Start();
        taskTwo.Start();

        try
        {
            Task.WaitAll(new[] { taskOne, taskTwo });
        }
        catch (AggregateException e)
        {
            Console.WriteLine(e.InnerExceptions.Count);
            foreach (var inner in e.InnerExceptions)
                Console.WriteLine(inner.Message);
        }

    Here, our routine will print:

        2
        Different exception here
        Random Exception!

    Note that we had two separate tasks, each of which raised a different type of exception. We can handle this cleanly, with very little code, in a much nicer manner than the Asynchronous Programming API. We no longer need to handle TargetInvocationException or worry about implementing the Event-based Asynchronous Pattern properly by setting the AsyncCompletedEventArgs.Error property. Instead, we just raise our exception as normal, and handle AggregateException in a single location in our calling code.


  • CakePHP Missing Database Table Error

    - by BRADINO
    I am baking a new project management application at work and added a couple of new tables to the database today. When I went into the console to bake the new models, they were not in the list:

        php /path/cake/console/cake.php bake all -app /path/app/

    So I manually typed in the model name and got a "missing database table" error for the model. I checked and double-checked, and the database table was named properly. It turns out that some files inside the /app/tmp/cache/ folder were causing Cake not to recognize that I had added new tables to my database. Once I deleted the cache files, Cake instantly recognized my new database tables and I was baking away!

        rm -Rf /path/app/tmp/cache/cake*


  • 2010 Collaboration Summit Impressions

    - by Elena Zannoni
    It's a bit late, but there you have it anyway. April 14 to 16 I attended the Linux Foundation Collaboration Summit in SFO. I was running two tracks, one on tracing and one on tools. You can see the tracks and the slides here: http://events.linuxfoundation.org/events/collaboration-summit/slides

    I was pretty busy both days: Thursday with a whole-day tracing track, Friday with a half-day toolchain track. The sessions were well attended, the rooms were full, with people spilling into the hallways. Some new things were presented, like Kernelshark, by Steve Rostedt, a GUI (yes, believe it or not, a GUI) written in GTK. It is very nice, showing a timeline for traced kernel events, and you can zoom in and filter at will. It works on the latest kernels, and it requires some new things/fixes in GTK; I don't recall exactly which version of GTK though. Dominique Toupin from Ericsson presented something about user requirements for tracing, mostly about who's who in the embedded world, and Eclipse. Masami and Mathieu presented an update on their work; see their slides.

    The interesting thing to me was of course the new version of uprobes without underlying utrace, presented by Jim Keniston. At the end of the session we had a discussion about the future of utrace. Roland wasn't there, but Tom Tromey (also from Red Hat) collected the feedback. Basically we are at a standstill now that utrace has been rejected yet again. There wasn't much advice that anybody could give, except, jokingly, we decided that the only way in is to make it a part of perf events. There needs to be another refactoring, but most of all, the "killer app" that would be enabled because of utrace hasn't materialized yet. We think that having a good debugging story on Linux is enough of a killer app, for instance allowing multiple tracers, and not relying on SIGCHLD etc. I think this wasn't completely clear to the kernel community. Trying to achieve debugging via a gdb stub inside the kernel, interfacing to utrace and controlled via the gdb remote protocol, also lost its appeal (thankfully, since the gdb remote protocol is archaic). Somebody would have to be creative in how to submit utrace. It doesn't have to be called utrace (it was really a random choice, for lack of a letter that was not already used in front of the word "trace"). So basically, I think the ideas behind utrace are sound, and the necessity of a new interface is acknowledged. But I believe the integration/submission process with the kernel folks has to restart from scratch, with a clean slate. We'll see. There are many conferences and meetings coming up in the near future where things can be discussed further.

    On the second day, Friday, we had the tools talks. It was interesting to observe the more "kernel" oriented people's behavior towards the gcc etc. community. The first talk was by Mark Mitchell, about GCC and its new plugin architecture. After that, Paolo talked about the new C++1x standard, which will be finalized in 2011. Many features are already implemented in the libstdc++ library and GCC and usable today. We had a few minutes (really, the half-day track was quite short) where Bradley Kuhn from the Software Freedom Law Center explained the GPLv3 exception for GCC (needed because of the new GCC plugin architecture and the availability of the intermediate results of compilation, which is a new thing). I will not try to explain, but basically you cannot take the result of the preprocessing and then use that in your own proprietary compiler.

    After that, we had a talk by Ian Taylor about the new gold linker. One good thing in that area is that they are trying to make gold the new default linker (for instance, Fedora will use gold as the distro linker). However, gold is very different from binutils' old linker; it doesn't use a linker script, for instance. The kernel has been linked with gold many times as an exercise (the groundwork was done by Kris Van Hees), but this needs to be constantly tested and monitored because the kernel linker script is very complex and uses esoteric features (Wenji is now monitoring that each kernel RC can be built with gold). It was positive that people are now aware of gold and the need for it to be ported to more architectures. It seems that the porting is very easy, with little arch-dependent code.

    Finally, Tom Tromey presented about gdb and the Archer project. Archer is a development branch of gdb mostly done by Red Hat, focusing on better C++ printing, C++ expression parsing, and plugins. The Archer work is merged regularly into the gdb mainline.

    In general it was a good conference. I missed most of the first day, because that's when I flew in, but I caught a couple of talks. Nothing earth shattering, except for Google giving each registered person a free Android phone. Yey.


  • ASP.NET 4 Unleashed in Bookstores!

    - by Stephen Walther
    I'm happy to announce that ASP.NET 4 Unleashed is now in bookstores! The book is over 1,800 pages and it is packed with code samples and tutorials on all the features of ASP.NET 4. Given the size of the book (did I mention that it is over 1,800 pages?) I can safely say that it is the most comprehensive book on ASP.NET. This edition of the book has several new chapters written by Kevin Hoffman and Nate Dudek. Kevin and Nate did a fantastic job of covering the new features of ASP.NET 4, including:

    - The new ASP.NET Chart Control
    - The new ASP.NET QueryExtender Control
    - The new ASP.NET routing framework
    - jQuery

    You can buy the book from your local bookstore or from Amazon.

