Search Results

Search found 9011 results on 361 pages for 'common'.


  • How to migrate primary key generation from "increment" to "hi-lo"?

    - by Bevan
    I'm working with a moderately sized SQL Server 2008 database (around 120 tables; backups are around 4GB compressed) where all the table primary keys are declared as simple int columns. At present, primary key values are generated by NHibernate with the increment identity generator, which has worked well so far but precludes moving to a multiprocessing environment. Load on the system is growing, so I'm evaluating the work required to allow multiple servers to access a common database backend. Transitioning to the hi-lo generator seems to be the best way forward, but I can't find much detail about how such a migration would work.

    Will NHibernate automatically create rows in the hi-lo table for me, or do I need to script these manually? If NHibernate does insert rows automatically, does it properly take account of existing key values? If NHibernate does take care of things automatically, that's great. If not, are there any tools to help?

    Update: NHibernate's increment identifier generator works entirely in memory. It's seeded by selecting the maximum value of used identifiers from the table, but from that point on it allocates new values by a simple increment, without reference back to the underlying database table. If any other process adds rows to the table, you end up with primary key collisions. You can run multiple threads within the one process just fine, but you can't run multiple processes.

    For comparison, the NHibernate identity generator works by configuring the database tables with identity columns, putting control over primary key generation in the hands of the database. This works well, but compromises the unit of work pattern. The hi-lo algorithm sits in between these: generation of primary keys is coordinated through the database, allowing for multiprocessing, but actual allocation can occur entirely in memory, avoiding problems with the unit of work pattern.
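
    For what it's worth, a minimal sketch of what the hi-lo wiring might look like, assuming NHibernate's default table and column names (hibernate_unique_key / next_hi) and a placeholder table name in the seeding script - the seed must be set high enough that newly generated keys clear all existing ones:

        <!-- hbm.xml mapping: switch the generator from "increment" to "hilo" -->
        <id name="Id" column="Id" type="int">
          <generator class="hilo">
            <param name="table">hibernate_unique_key</param>
            <param name="column">next_hi</param>
            <param name="max_lo">100</param>
          </generator>
        </id>

        -- Seed script (run once per database); SomeTable is a placeholder.
        -- With max_lo = 100, next_hi must exceed MAX(Id) / 100 for every table
        -- sharing the hi-lo row, or new keys could collide with existing ones.
        CREATE TABLE hibernate_unique_key (next_hi INT NOT NULL);
        INSERT INTO hibernate_unique_key (next_hi)
        SELECT MAX(Id) / 100 + 1 FROM SomeTable;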

    Read the article

  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes consisting of hundreds of thousands of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts, and the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture map sharing: for example, all the chairs in a scene will share a common map. There is also some multitexturing - up to three textures overlaid in a material.

    I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will show significant performance differences depending on whether there are 10 or 1000 objects, assuming that a new material is set up each time an object is displayed. So it seems that if performance is important, the scene should be sorted by material so as to minimize material switching. What I'm looking for is guidelines on how to think about the overhead of various state changes, and where the biggest bang for the buck is. For example:

    - What are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), and gl.drawElements()?
    - Should I try to write ubershaders to minimize shader switching?
    - Should I try to aggregate geometry to minimize the number of gl.drawElements() calls?

    I realize that mileage may vary depending on browser, OS, and graphics hardware. I'm also not looking for heroic measures - just some guidelines from people who have already had some experience in making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.
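
    As a starting point, a minimal sketch (object fields like programId are hypothetical) of the sort-by-material loop the question describes - program switches batched first, texture binds second, cheap per-object uniforms last:

        // Sort once, since the scene is static: program first, then texture.
        objects.sort(function (a, b) {
            return (a.programId - b.programId) || (a.textureId - b.textureId);
        });

        var currentProgram = null, currentTexture = null;
        for (var i = 0; i < objects.length; i++) {
            var obj = objects[i];
            if (obj.program !== currentProgram) {       // usually the most expensive switch
                gl.useProgram(obj.program);
                currentProgram = obj.program;
            }
            if (obj.texture !== currentTexture) {       // cheaper, but still worth batching
                gl.bindTexture(gl.TEXTURE_2D, obj.texture);
                currentTexture = obj.texture;
            }
            // Per-object state is cheap by comparison (vertex attribute setup omitted).
            gl.uniformMatrix4fv(obj.mvpLocation, false, obj.mvpMatrix);
            gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, obj.indexBuffer);
            gl.drawElements(gl.TRIANGLES, obj.indexCount, gl.UNSIGNED_SHORT, 0);
        }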

    Read the article

  • Python and Unicode: How everything should be Unicode

    - by A A
    Forgive me if this is a long question: I have been programming in Python for around six months. Self taught, starting with the Python tutorial, then SO, and then just using Google for stuff. Here is the sad part: no one told me all strings should be Unicode. No, I am not lying or making this up, but where does the tutorial mention it? And most examples I see just make use of byte strings instead of Unicode strings. I was just browsing and came across this question on SO, which says that every string in Python should be Unicode. This pretty much made me cry!

    I read that every string in Python 3.0 is Unicode by default, so my questions are for 2.x:

    - Should I do print u'Some text' or just print 'Text'?
    - Everything should be Unicode - does this mean that if I have a tuple t = ('First', 'Second'), it should be t = (u'First', u'Second')?
    - I read that I can do from __future__ import unicode_literals and then every string literal will be Unicode, but does this apply inside a container as well?
    - When reading/writing to a file, should I use the codecs module? Or should I just use the standard way of reading/writing and encode or decode where required?
    - If I get a string from, say, raw_input(), should I convert that to Unicode also?

    What is the common approach to handling all of the above issues in 2.x? The from __future__ import unicode_literals statement?

    Sorry for being such a noob, but this changes what I have been doing for a long time, so clearly I am confused.
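
    A small Python 2.x sketch pulling those pieces together - decode bytes at the boundaries, work in Unicode internally, encode on the way out:

        # -*- coding: utf-8 -*-
        from __future__ import unicode_literals   # every literal below is now unicode
        import codecs

        t = ('First', 'Second')                   # both elements are unicode objects

        # Decode incoming bytes (raw_input returns a byte string):
        name = raw_input('Name: ').decode('utf-8')

        # codecs.open handles encoding/decoding for file I/O:
        with codecs.open('out.txt', 'w', encoding='utf-8') as f:
            f.write('Dear %s\n' % name)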

    Read the article

  • Is there a Universal Model for languages?

    - by Smandoli
    Many programming languages share generic and even fairly universal features. For example, if you compared Java, VB6, .NET, PHP, and Python, you would find common functions such as control structures, numeric and string manipulation, etc. What has been done to define these features at a meta-language (or language-agnostic) level? UML offers a descriptive reference of software in every aspect, but the real-world focus seems to be data processes. Is UML relevant?

    I'm not asking "Why don't we have a single language that replaces the current plethora?" We need many different tools (at least in this eon). I'm not asking that all languages fit a template - assembly vs. compiled languages are different enough to make that unfeasible (and some folks call HTML a language, though I wouldn't). Any attempt would start with a properly narrow scope. In line with this, I wouldn't expect the model to cover even a small selection with full validity. I would expect, however, that such a model could be used to transpose from one language to another (with limited goals - think gist translation).

    Read the article

  • Email function using templates. Includes via ob_start and global vars

    - by Geo
    I have a simple Email() class. It's used to send out emails from my website:

        <? Email::send($to, $subj, $msg, $options); ?>

    I also have a bunch of email templates written in plain HTML, pierced with a few PHP variables, e.g. /inc/email/templates/account_created.php:

        <p>Dear <?=$name?>,</p>
        <p>Thank you for creating an account at <?=$SITE_NAME?>. To login use the link below:</p>
        <p><a href="https://<?=$SITE_URL?>/account" target="_blank"><?=$SITE_NAME?>/account</a></p>

    In order to have the PHP vars rendered, I had to include the template into my function. But since include does not return the contents but rather sends them directly to the output, I had to wrap it with the buffer functions:

        <?
        abstract class Email
        {
            public static function send($to, $subj, $msg, $options = array())
            {
                /* ... */
                ob_start();
                include '/inc/email/templates/account_created.php';
                $msg = ob_get_clean();
                /* ... */
            }
        }

    After that I realized that the PHP vars are not rendered, as they are inside the function scope, so I had to globalize the variables inside the template:

        <? global $SITE_NAME, $SITE_URL, $name; ?>
        <p>Dear <?=$name?>,</p>
        ...

    So the question is whether there is a more elegant solution to this. Mainly I am concerned about my workarounds using ob_start() and global - for some reason that seems odd to me. Or is this pretty much common practice?
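
    One common alternative, sketched below (the render() helper is hypothetical, not part of the class above): pass the variables explicitly and extract() them inside the including function, so they become locals in the template's scope and no global declarations are needed:

        <?php
        abstract class Email
        {
            // $vars become local variables inside the included template.
            protected static function render($template, array $vars)
            {
                extract($vars);
                ob_start();
                include $template;
                return ob_get_clean();
            }

            public static function send($to, $subj, $msg, $options = array())
            {
                /* ... */
                $msg = self::render('/inc/email/templates/account_created.php', array(
                    'name'      => $options['name'],   // assuming the name travels in $options
                    'SITE_NAME' => $GLOBALS['SITE_NAME'],
                    'SITE_URL'  => $GLOBALS['SITE_URL'],
                ));
                /* ... */
            }
        }

    The ob_start()/ob_get_clean() pair itself is normal practice for capturing included templates; it's only the global declarations this removes.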

    Read the article

  • How to generate a monochrome bit mask for a 32-bit bitmap

    - by Mordachai
    Under Win32, it is a common technique to generate a monochrome bitmask from a bitmap for transparency use by doing the following:

        SetBkColor(hdcSource, clrTransparency);
        VERIFY(BitBlt(hdcMask, 0, 0, bm.bmWidth, bm.bmHeight, hdcSource, 0, 0, SRCCOPY));

    This assumes that hdcSource is a memory DC holding the source image, and hdcMask is a memory DC holding a monochrome bitmap of the same size (so both are 32x32, but the source is 4-bit color, while the target is 1-bit monochrome).

    However, this seems to fail for me when the source is 32-bit color + alpha. Instead of getting a monochrome bitmap in hdcMask, I get a mask that is all black: no bits get set to white (1), whereas this works for the 4-bit color source. My search-foo is failing, as I cannot seem to find any references to this particular problem. I have isolated that this is indeed the issue in my code: i.e. if I use a source bitmap that is 16-color (4-bit), it works; if I use a 32-bit image, it produces the all-black mask.

    Is there an alternate method I should be using in the case of 32-bit color images? Is there an issue with the alpha channel that overrides the normal behavior of the above technique? Thanks for any help you may have to offer!

    ADDENDUM: I am still unable to find a technique that creates a valid monochrome bitmap for my GDI+-produced source bitmap. I have somewhat alleviated my particular issue by simply not generating a monochrome bitmask at all; instead I'm using TransparentBlt(), which seems to get it right (but I don't know what they're doing internally that's any different and allows them to correctly mask the image). It might be useful to have a really good, working function:

        HBITMAP CreateTransparencyMask(HDC hdc, HBITMAP hSource, COLORREF crTransparency);

    where it always creates a valid transparency mask, regardless of the color depth of hSource. Ideas?
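
    A sketch of such a function, built on the unverified assumption that the alpha channel is what defeats the SetBkColor comparison, so the source is first flattened into a 24-bit (alpha-free) copy:

        HBITMAP CreateTransparencyMask(HDC hdc, HBITMAP hSource, COLORREF crTransparent)
        {
            BITMAP bm;
            GetObject(hSource, sizeof(bm), &bm);

            // 1. Copy the source into a 24bpp DIB section, discarding alpha.
            BITMAPINFO bmi = { 0 };
            bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
            bmi.bmiHeader.biWidth       = bm.bmWidth;
            bmi.bmiHeader.biHeight      = bm.bmHeight;
            bmi.bmiHeader.biPlanes      = 1;
            bmi.bmiHeader.biBitCount    = 24;
            bmi.bmiHeader.biCompression = BI_RGB;
            void* bits = NULL;
            HBITMAP hFlat = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);

            HDC hdcSrc  = CreateCompatibleDC(hdc);
            HDC hdcFlat = CreateCompatibleDC(hdc);
            HBITMAP oldSrc  = (HBITMAP)SelectObject(hdcSrc, hSource);
            HBITMAP oldFlat = (HBITMAP)SelectObject(hdcFlat, hFlat);
            BitBlt(hdcFlat, 0, 0, bm.bmWidth, bm.bmHeight, hdcSrc, 0, 0, SRCCOPY);

            // 2. The usual color-to-mono trick, now against the flattened copy.
            HBITMAP hMask = CreateBitmap(bm.bmWidth, bm.bmHeight, 1, 1, NULL);
            HDC hdcMask = CreateCompatibleDC(hdc);
            HBITMAP oldMask = (HBITMAP)SelectObject(hdcMask, hMask);
            SetBkColor(hdcFlat, crTransparent);
            BitBlt(hdcMask, 0, 0, bm.bmWidth, bm.bmHeight, hdcFlat, 0, 0, SRCCOPY);

            SelectObject(hdcSrc, oldSrc);
            SelectObject(hdcFlat, oldFlat);
            SelectObject(hdcMask, oldMask);
            DeleteDC(hdcSrc); DeleteDC(hdcFlat); DeleteDC(hdcMask);
            DeleteObject(hFlat);
            return hMask;
        }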

    Read the article

  • Is it wise to use temporary tables?

    - by Industrial
    Hi guys,

    We have a MySQL database table for products. We are utilizing a cache layer to reduce database load, but we think it's a good idea to minimize the actual data needed to be stored in the cache layer, to speed up the application further.

    All the products in the database that are visible to visitors have a price attached to them. The prices are stored in a different table, called prices. There are multiple price categories, depending on which discount level each visitor (customer) applies to. From time to time there are campaigns, which means that a special price for each product is available. The special prices are stored in a table called specials.

    Is it bad to make a temp table that binds the tables together? It would only have the necessary information and would of course be cached:

        productId | hasPrice | hasSpecial
        ----------|----------|-----------
                1 |        1 |          0
                2 |        1 |          1

    By doing so, it would be super easy to know whether a specific product really has a price, without having to iterate through the complete prices or specials table each time a product should be listed or presented. Are temp tables a common thing for web applications, or is it just bad design?
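
    A sketch of how such a summary table could be built in MySQL (table and column names assumed from the description; it's a persistent table refreshed when prices change, rather than a true TEMPORARY table, so it survives between requests and can be cached):

        CREATE TABLE product_flags (
            productId  INT PRIMARY KEY,
            hasPrice   TINYINT NOT NULL,
            hasSpecial TINYINT NOT NULL
        );

        -- Rebuild whenever prices/specials change:
        INSERT INTO product_flags (productId, hasPrice, hasSpecial)
        SELECT p.id,
               EXISTS (SELECT 1 FROM prices   pr WHERE pr.product_id = p.id),
               EXISTS (SELECT 1 FROM specials sp WHERE sp.product_id = p.id)
        FROM products p;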

    Read the article

  • Refactoring exercise with generics

    - by Berryl
    I have a variation on a Quantity (Fowler) class that is designed to facilitate conversion between units. The type is declared as:

        public class QuantityConvertibleUnits<TFactory>
            where TFactory : ConvertableUnitFactory, new()
        {
            ...
        }

    In order to do math operations between dissimilar units, I convert the right-hand side of the operation to the equivalent Quantity of whatever unit the left-hand side is in, and do the math on the amount (which is a double) before creating a new Quantity. Inside the generic Quantity class, I have the following:

        protected static TQuantity _Add<TQuantity>(TQuantity lhs, TQuantity rhs)
            where TQuantity : QuantityConvertibleUnits<TFactory>, new()
        {
            var toUnit = lhs.ConvertableUnit;
            var equivalentRhs = _Convert<TQuantity>(rhs.Quantity, toUnit);
            var newAmount = lhs.Quantity.Amount + equivalentRhs.Quantity.Amount;
            return _Convert<TQuantity>(new Quantity(newAmount, toUnit.Unit), toUnit);
        }

        protected static TQuantity _Subtract<TQuantity>(TQuantity lhs, TQuantity rhs)
            where TQuantity : QuantityConvertibleUnits<TFactory>, new()
        {
            var toUnit = lhs.ConvertableUnit;
            var equivalentRhs = _Convert<TQuantity>(rhs.Quantity, toUnit);
            var newAmount = lhs.Quantity.Amount - equivalentRhs.Quantity.Amount;
            return _Convert<TQuantity>(new Quantity(newAmount, toUnit.Unit), toUnit);
        }

        ... same for multiply and also divide

    I need to get the typing right for a concrete Quantity, so an example of an add op looks like:

        public static ImperialLengthQuantity operator +(ImperialLengthQuantity lhs, ImperialLengthQuantity rhs)
        {
            return _Add(lhs, rhs);
        }

    The question is about those verbose methods in the Quantity class. The only change between them is the math operator (+, -, *, etc.), so it seems there should be a way to refactor them into a common method, but I am just not seeing it. How can I refactor that code?

    Cheers,
    Berryl
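
    One way to factor it, sketched: since only the operator differs, pass it in as a delegate and keep a single worker method (the _Combine name is made up):

        private static TQuantity _Combine<TQuantity>(TQuantity lhs, TQuantity rhs,
                                                     Func<double, double, double> op)
            where TQuantity : QuantityConvertibleUnits<TFactory>, new()
        {
            var toUnit = lhs.ConvertableUnit;
            var equivalentRhs = _Convert<TQuantity>(rhs.Quantity, toUnit);
            var newAmount = op(lhs.Quantity.Amount, equivalentRhs.Quantity.Amount);
            return _Convert<TQuantity>(new Quantity(newAmount, toUnit.Unit), toUnit);
        }

        protected static TQuantity _Add<TQuantity>(TQuantity lhs, TQuantity rhs)
            where TQuantity : QuantityConvertibleUnits<TFactory>, new()
        {
            return _Combine(lhs, rhs, (a, b) => a + b);
        }

        protected static TQuantity _Subtract<TQuantity>(TQuantity lhs, TQuantity rhs)
            where TQuantity : QuantityConvertibleUnits<TFactory>, new()
        {
            return _Combine(lhs, rhs, (a, b) => a - b);
        }

        // Multiply and divide follow the same one-line pattern.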

    Read the article

  • How to use symbols/punctuation characters in discriminated unions

    - by user343550
    I'm trying to create a discriminated union for part-of-speech tags and other labels returned by a natural language parser. It's common to use either strings or enums for these in C#/Java, but discriminated unions seem more appropriate in F# because these are distinct, read-only values. In the language reference, I found that the ``...`` syntax can be used to delimit keywords/reserved words. This works for:

        type ArgumentType =
            | A0 // subject
            | A1 // indirect object
            | A2 // direct object
            | A3
            | A4
            | A5
            | AA
            | ``AM-ADV``

    However, the tags contain symbols like $, e.g.:

        type PosTag =
            | CC    // Coordinating conjunction
            | CD    // Cardinal number
            | DT    // Determiner
            | EX    // Existential there
            | FW    // Foreign word
            | IN    // Preposition or subordinating conjunction
            | JJ    // Adjective
            | JJR   // Adjective, comparative
            | JJS   // Adjective, superlative
            | LS    // List item marker
            | MD    // Modal
            | NN    // Noun, singular or mass
            | NNP   // Proper noun, singular
            | NNPS  // Proper noun, plural
            | NNS   // Noun, plural
            | PDT   // Predeterminer
            | POS   // Possessive ending
            | PRP   // Personal pronoun
            | PRP$  // Possessive pronoun
            | RB    // Adverb
            | RBR   // Adverb, comparative
            | RBS   // Adverb, superlative
            | RP    // Particle
            | SYM   // Symbol
            | TO    // to
            | UH    // Interjection
            | VB    // Verb, base form
            | VBD   // Verb, past tense
            | VBG   // Verb, gerund or present participle
            | VBN   // Verb, past participle
            | VBP   // Verb, non-3rd person singular present
            | VBZ   // Verb, 3rd person singular present
            | WDT   // Wh-determiner
            | WP    // Wh-pronoun
            | WP$   // Possessive wh-pronoun
            | WRB   // Wh-adverb
            | ``#``
            | ``$``
            | ``''``
            | ``(``
            | ``)``
            | ``,``
            | ``.``
            | ``:``
            | ``  // not sure how to escape/delimit this

    The ``...`` delimiting isn't working for WP$ or symbols like (. Also, I have the interesting problem that the parser returns `` as a meaningful symbol, so I need to escape it as well. Is there some other way to do this, or is this just not possible with a discriminated union? Right now I'm getting errors like:

        Invalid namespace, module, type or union case name
        Discriminated union cases and exception labels must be uppercase identifiers

    I suppose I could somehow override ToString for these goofy cases and replace the symbols with some alphanumeric equivalent?
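
    If the escaping turns out to be a dead end, one workaround sketch is to keep the case names alphanumeric and translate at the boundary (only a few cases shown; the mapping functions are hypothetical):

        type PosTag =
            | CC | CD | DT            // ... and so on for the alphanumeric tags
            | PRPDollar               // "PRP$"
            | WPDollar                // "WP$"
            | LeftParen               // "("
            | Backtick                // "``"

        let parseTag = function
            | "PRP$" -> PRPDollar
            | "WP$"  -> WPDollar
            | "("    -> LeftParen
            | "``"   -> Backtick
            | "CC"   -> CC
            | "CD"   -> CD
            | "DT"   -> DT
            | t      -> failwithf "unknown tag: %s" t

        let tagToString = function
            | PRPDollar -> "PRP$"
            | WPDollar  -> "WP$"
            | LeftParen -> "("
            | Backtick  -> "``"
            | other     -> sprintf "%A" other   // alphanumeric cases print as themselves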

    Read the article

  • Partially constructed object / Multi threading

    - by reto
    Heya! I'm using Joda due to its good reputation regarding multithreading. It goes to great lengths to make multithreaded date handling efficient, for example by making all Date/Time/DateTime objects immutable. But here's a situation where I'm not sure Joda is really doing the right thing. It probably is correct, but I'd be very interested to see the explanation for it. When toString() of a DateTime is called, Joda does the following:

        /* org.joda.time.base.AbstractInstant */
        public String toString() {
            return ISODateTimeFormat.dateTime().print(this);
        }

    All formatters are thread safe, as they are read-only. But what about the formatter factory:

        /* org.joda.time.format.ISODateTimeFormat */
        private static DateTimeFormatter dt;

        public static DateTimeFormatter dateTime() {
            if (dt == null) {
                dt = new DateTimeFormatterBuilder()
                        .append(date())
                        .append(tTime())
                        .toFormatter();
            }
            return dt;
        }

    This is a common pattern in single-threaded applications. I see the following dangers:

    - Race condition during the null check - worst case, two objects get created. No problem, as this is solely a helper object (unlike a normal singleton pattern situation); one gets saved in dt, the other is lost and will be garbage collected sooner or later.
    - The static variable might point to a partially constructed object before the object has finished initialization (before calling me crazy, read about a similar situation in this Wikipedia article).

    So how does Joda ensure that no partially constructed formatter gets published in this static variable? Thanks for your explanations!

    Reto
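
    For comparison, a sketch of a lazy-initialization pattern that is unambiguously safe under the Java memory model without locking on the read path - the class-holder idiom (shown as the safe reference point, not as what Joda actually does):

        import org.joda.time.format.DateTimeFormatter;
        import org.joda.time.format.DateTimeFormatterBuilder;
        import org.joda.time.format.ISODateTimeFormat;

        public final class Formatters {
            private Formatters() {}

            private static final class Holder {
                // The JLS guarantees class initialization runs once, and that any
                // thread triggering it sees the fully constructed object.
                static final DateTimeFormatter DT = new DateTimeFormatterBuilder()
                        .append(ISODateTimeFormat.date())
                        .append(ISODateTimeFormat.tTime())
                        .toFormatter();
            }

            public static DateTimeFormatter dateTime() {
                return Holder.DT;
            }
        }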

    Read the article

  • SQL Server performance optimization by removing PRINT statements

    - by AG
    We're going through a round of SQL Server stored procedure optimizations. The one recommendation we've found that clearly applies to us is SET NOCOUNT ON at the top of each procedure. (Yes, I've seen the posts that point out issues with this depending on what client objects you run the stored procedures from, but these are not issues for us.)

    So now I'm just trying to apply a bit of common sense. If the benefit of SET NOCOUNT ON is simply to reduce network traffic by some small amount every time, wouldn't it also make sense to turn off all the PRINT statements we have in the stored procedures that we only use for debugging? I can't see how it can hurt performance. OTOH, it's a bit of a hassle to implement, due to the fact that some of the PRINT statements are the only thing within else clauses, so you can't just always comment out the one line and be done. The change carries some amount of risk, so I don't want to do it if it isn't going to actually help. But I don't see eliminating PRINT statements mentioned anywhere in articles on optimization. Is that because it is so obvious no one bothers to mention it?
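
    A middle-ground sketch, if deleting the diagnostics feels too risky: make the PRINT statements opt-in behind a parameter, so production calls skip them but the debugging aid stays (procedure name and messages are placeholders):

        CREATE PROCEDURE dbo.usp_Example
            @debug BIT = 0
        AS
        BEGIN
            SET NOCOUNT ON;

            IF @debug = 1 PRINT 'entering usp_Example';

            -- ... real work here ...

            IF @debug = 1 PRINT 'done';
        END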

    Read the article

  • ASP.NET MVC Authorize by Subdomain

    - by Jimmo
    I have what seems like a common issue with SaaS applications, but I have not seen this question on here anywhere. I am using ASP.NET MVC with Forms Authentication. I have implemented a custom membership provider to handle the logic, but have one issue (perhaps the issue is in my mental picture of the system).

    As with many SaaS apps, customers create accounts and use the app in a way that looks like they are the only ones present (they only see their items, users, etc.). In reality, there are generic controllers and views presenting data depending on the customer represented in the URL. When calling something like MembershipProvider.ValidateUser, I have access to the user's customer affiliation in the User object - what I don't have is the context of the request, to compare whether the data request is for the same customer as the user's.

    As an example: one company called ABC goes to abc.mysite.com, and another company called XYZ goes to xyz.mysite.com. When an ABC user calls http://abc.mysite.com/product/edit/12, I have an [Authorize] attribute on the Edit method in the ProductController to make sure he is signed in and has sufficient permission to do so. If that same ABC user tried to access http://xyz.mysite.com/product/edit/12, I would not want to validate him in the context of that call. In the ValidateUser method of the MembershipProvider, I have the information about the user but not about the request: I can tell that the user is from ABC, but I cannot tell that the request is for XYZ at that point in the code. How should I resolve this?
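
    One possible shape for this, sketched as a custom authorize attribute (CustomerRepository.GetCustomerCode is a hypothetical lookup; the point is only that the subdomain is available from the request inside the authorization check, which it isn't inside the membership provider):

        public class CustomerAuthorizeAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                if (!base.AuthorizeCore(httpContext))
                    return false;

                // "abc" from "abc.mysite.com"
                var subdomain = httpContext.Request.Url.Host.Split('.')[0];

                // Hypothetical: which customer does the signed-in user belong to?
                var userCustomer = CustomerRepository.GetCustomerCode(
                    httpContext.User.Identity.Name);

                return string.Equals(subdomain, userCustomer,
                                     StringComparison.OrdinalIgnoreCase);
            }
        }

    Controllers would then use [CustomerAuthorize] in place of the stock [Authorize].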

    Read the article

  • Creating content for rails-based applications

    - by Matthias Hryniszak
    Hi,

    I'm facing a problem cleaning up my application in Ruby on Rails. What I have is a pretty standard 3-panel, header-and-footer layout where different parts of the screen contain different functionality. By that I mean, for example, that the header contains (among other things) a select that allows one to choose parts of the application, and a context-dependent menu. The main content area contains obviously the most interactive stuff, whereas the side panels contain quick links with things like a shopping-cart preview, a list of potentially attractive products for the customer, a selector to narrow down the list of options...

    I was wondering how to go about simplifying the design. Right now the data for the "common" stuff (as opposed to the direct content that's placed in the center) is provided by a filter called from all the actions, but that doesn't feel right to me. I've read that "components" are also not the way to go, for obvious performance reasons. Is there something more component-oriented (other frameworks do have that kind of thing - Grails: <ui:include ../>, ASP.NET MVC: <% Html.RenderAction() %>)?

    Best regards,
    Matthias.
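
    For what it's worth, the nearest stock Rails equivalent is partials plus a shared filter; a sketch (paths and the current_cart helper are made up):

        # app/controllers/application_controller.rb
        class ApplicationController < ActionController::Base
          before_filter :load_sidebar_data

          private

          def load_sidebar_data
            @cart = current_cart   # hypothetical helper
          end
        end

        # app/views/layouts/application.html.erb
        <div id="sidebar">
          <%= render :partial => 'shared/cart_preview', :locals => { :cart => @cart } %>
          <%= render :partial => 'shared/suggested_products' %>
        </div>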

    Read the article

  • PowerPoint displays a "can't start the application" error when an Excel Chart object is embedded in

    - by A9S6
    This is a very common problem when an Excel Worksheet or Chart is embedded into Word or PowerPoint. I am seeing this problem in both Word and PowerPoint, and the reason, it seems, is the COM addin attached to Excel. The COM addin is written in C# (.NET). See the attached images for the error dialogs.

    I debugged the addin and found very strange behavior. The OnConnection(...), OnDisconnection(...) etc. methods in the COM addin work fine until I add an event handler to the code, i.e. handle the Worksheet_SheetChange, SelectionChange, or any similar event available in Excel. As soon as I add even a single event handler (though my code has several), Word and PowerPoint start complaining and do not activate the embedded object.

    In some posts on the internet, people have been asked to remove the anti-virus addins for Office (none in my case), so this makes me believe that the problem is somehow related to COM addins that are loaded when the host app activates the object. Does anyone have any idea what's happening here?

    Read the article

  • Returning the same type the function was passed

    - by Ken Bloom
    I have the following code implementation of breadth-first search:

        trait State {
          def successors: Seq[State]
          def isSuccess: Boolean = false
          def admissableHeuristic: Double
        }

        def breadthFirstSearch(initial: State): Option[List[State]] = {
          val open = new scala.collection.mutable.Queue[List[State]]
          val closed = new scala.collection.mutable.HashSet[State]
          open.enqueue(initial :: Nil)
          while (!open.isEmpty) {
            val path: List[State] = open.dequeue()
            if (path.head.isSuccess) return Some(path.reverse)
            closed += path.head
            for (x <- path.head.successors)
              if (!closed.contains(x)) open.enqueue(x :: path)
          }
          return None
        }

    If I define a subtype of State for my particular problem:

        class CannibalsState extends State {
          //...
        }

    what's the best way to make breadthFirstSearch return the same subtype as it was passed? Supposing I change this so that there are 3 different state classes for my particular problem, and they share a common supertype:

        abstract class CannibalsState extends State {
          //...
        }
        class LeftSideOfRiver extends CannibalsState {
          //...
        }
        class InTransit extends CannibalsState {
          //...
        }
        class RightSideOfRiver extends CannibalsState {
          //...
        }

    How can I make the types work out so that breadthFirstSearch infers that the correct return type is CannibalsState when it's passed an instance of LeftSideOfRiver? Can this be done with an abstract type member, or must it be done with generics?
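
    A sketch of the generics route, using an F-bounded type parameter so that successors preserves the concrete subtype (note this changes the trait's signature, so existing State implementations would need the extra parameter, and it's worth verifying that the compiler widens to CannibalsState as hoped in the LeftSideOfRiver case):

        trait State[S <: State[S]] { self: S =>
          def successors: Seq[S]
          def isSuccess: Boolean = false
          def admissableHeuristic: Double
        }

        def breadthFirstSearch[S <: State[S]](initial: S): Option[List[S]] = {
          val open = new scala.collection.mutable.Queue[List[S]]
          val closed = new scala.collection.mutable.HashSet[S]
          open.enqueue(initial :: Nil)
          while (!open.isEmpty) {
            val path = open.dequeue()
            if (path.head.isSuccess) return Some(path.reverse)
            closed += path.head
            for (x <- path.head.successors)
              if (!closed.contains(x)) open.enqueue(x :: path)
          }
          None
        }

        // abstract class CannibalsState extends State[CannibalsState] { ... }
        // breadthFirstSearch(leftSide)  :  Option[List[CannibalsState]]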

    Read the article

  • Cannot launch 16-bit application anymore

    - by Nick Bedford
    I'm trying to debug and resolve some issues with a Win32 macro application written in C++, but I'm having the strangest issue. I have to launch a 16-bit program and then simulate entering data into it, and I have been using ShellExecute for over two years now. I haven't touched this actual code at all, but now it doesn't work. I'm doing:

        ShellExecute(NULL, "open", exe_path.c_str(), NULL, "", SW_SHOWDEFAULT);

    This has worked flawlessly for years, but all of a sudden it stopped working. It gives me an ACCESS_DENIED error code. I've Googled, and apparently this is a pretty common issue with launching 16-bit apps. The workstation XP SP2 environment hasn't changed at all, and it was actually working until I rebuilt a little while ago (I've rebuilt it before many times).

    The code is inside a window procedure function; when I take it out and launch the program from the WinMain function, it works, but the code has to be in the window procedure... I've tried numerous alternatives, but they all give the same issue. The biggest issue is that it was working, then all of a sudden it decided it wasn't going to, with no change to either code or environment! In fact, it was about halfway through testing changes that it stopped working. Please help, as I cannot do anything without the program launching - it's the first step in the code that I'm debugging!

    Read the article

  • Testing for interface implementation in WCF/SOA

    - by rabidpebble
    I have a reporting service that implements a number of reports. Each report requires certain parameters. Groups of logically related parameters are placed in an interface, which the report then implements:

        [ServiceContract]
        [ServiceKnownType(typeof(ExampleReport))]
        public interface IService1
        {
            [OperationContract]
            void Process(IReport report);
        }

        public interface IReport
        {
            string PrintedBy { get; set; }
        }

        public interface IApplicableDateRangeParameter
        {
            DateTime StartDate { get; set; }
            DateTime EndDate { get; set; }
        }

        [DataContract]
        public abstract class Report : IReport
        {
            [DataMember]
            public string PrintedBy { get; set; }
        }

        [DataContract]
        public class ExampleReport : Report, IApplicableDateRangeParameter
        {
            [DataMember]
            public DateTime StartDate { get; set; }

            [DataMember]
            public DateTime EndDate { get; set; }
        }

    The problem is that the WCF DataContractSerializer does not expose these interfaces in my client library, so I can't write the generic report-generating front end that I plan to. Can WCF expose these interfaces, or is this a limitation of the serializer? If the latter, what is the canonical approach to this OO pattern?

    I've looked into NetDataContractSerializer, but it doesn't seem to be an officially supported implementation (which means it's not an option in my project). Currently I've resigned myself to including the interfaces in a library that is shared between the service and the client application, but this seems like an unnecessary extra dependency to me. Surely there is a more straightforward way to do this? I was under the impression that WCF was supposed to replace .NET Remoting; checking whether an object implements an interface seems to be one of the most basic features required of a remoting interface.

    Read the article

  • How to run White + SL4 UATs through TeamCity?

    - by Duncan Bayne
    After experiencing a series of unpleasant issues with TFS, including source code corruption and project management inflexibility, we (meaning the project team of which I'm a part) have decided to move from TFS 2010 to TeamCity + SVN + V1. I've managed to get our MSTest component and unit tests running as part of every build. However, our UATs are failing, and I was hoping for some advice from the TeamCity community as to best practices w.r.t. running web servers and interacting with the desktop.

    Each of our UAT fixtures starts a web server to host the site, like this:

        public static void StartWebServer()
        {
            var pathToSite = @"C:\projects\myproject\FrontEnd\MyProject.FrontEnd.Web";
            var webServer = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite),
                    FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE"
                }
            };
            webServer.Start();
        }

    Needless to say, this doesn't work when running through TeamCity, as the pathToSite value is different each time. I'm hoping there is a way of determining the path into which the code is checked out prior to building - that would allow me to point the web server at the right place.

    The other issue is that our UATs use White to drive the Silverlight UI through an instance of Internet Explorer:

        _browserWindow = InternetExplorer.Launch("http://localhost:9150/index.html#/Home", "Home - Windows Internet Explorer");
        _document = _browserWindow.SilverlightDocument;

    I've ensured that the TeamCity service is granted the ability to interact with the desktop, and I've set the build agent machine up to log in automatically (an open session is a prerequisite for White to work properly). Is that all I need to do, or are there additional steps required?
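
    For the path problem, a sketch of one approach: let the build configuration hand the path to the tests instead of hard-coding it - TeamCity can expand %teamcity.build.checkoutDir% into an environment variable - with a relative-path fallback for local runs (the relative hop is an assumption about the solution layout; requires System.IO and System.Reflection):

        public static string GetSitePath()
        {
            // In TeamCity, define something like:
            //   env.SITE_PATH = %teamcity.build.checkoutDir%\FrontEnd\MyProject.FrontEnd.Web
            var fromEnv = Environment.GetEnvironmentVariable("SITE_PATH");
            if (!string.IsNullOrEmpty(fromEnv))
                return fromEnv;

            // Local fallback: walk up from the test assembly's output directory.
            var assemblyDir = Path.GetDirectoryName(
                Assembly.GetExecutingAssembly().Location);
            return Path.GetFullPath(Path.Combine(
                assemblyDir, @"..\..\..\FrontEnd\MyProject.FrontEnd.Web"));
        }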

    Read the article

  • Is a GWT app running on Google App Engine protected from CSRF

    - by gerdemb
    I'm developing a GWT app running on Google App Engine, and I'm wondering whether I need to worry about cross-site request forgery or whether that is automatically taken care of for me. For every RPC request that requires authentication, I have the following code:

        public class BookServiceImpl extends RemoteServiceServlet implements BookService {
            public void deleteInventory(Key<Inventory> inventoryKey)
                    throws NotLoggedInException, InvalidStateException, NotFoundException {
                DAO dao = new DAO();
                // This will throw NotLoggedInException if user is not logged in
                User user = dao.getCurrentUser();
                // Do deletion here
            }
        }

        public final class DAO extends DAOBase {
            public User getCurrentUser() throws NotLoggedInException {
                currentUser = UserServiceFactory.getUserService().getCurrentUser();
                if (currentUser == null) {
                    throw new NotLoggedInException();
                }
                return currentUser;
            }
        }

    I couldn't find any documentation on how the UserService checks authentication. Is it enough to rely on the code above, or do I need to do more? I'm a beginner at this, but from what I understand, to avoid CSRF attacks some of the strategies are:

    - adding an authentication token in the request payload instead of just checking a cookie
    - checking the HTTP Referer header

    I can see that I have cookies set from Google with what look like SID values, but I can't tell from the serialized Java objects in the payloads whether tokens are being passed or not. I also don't know if the Referer header is being used or not. So, am I worrying about a non-issue? If not, what is the best strategy here? This is a common enough problem that there must be standard solutions out there...
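
    GWT's RPC payload is not a plain form post, but as I understand it that alone is no guarantee. A belt-and-braces sketch on the server side (the header name and check are made up; a cross-site HTML form cannot set custom headers, so requiring one blocks the simplest CSRF vector):

        public class BookServiceImpl extends RemoteServiceServlet implements BookService {

            @Override
            protected void onAfterRequestDeserialized(RPCRequest rpcRequest) {
                // The GWT client must be configured to send this header with
                // every RPC call (e.g. via a custom RpcRequestBuilder).
                String header = getThreadLocalRequest().getHeader("X-CSRF-Check");
                if (!"1".equals(header)) {
                    throw new SecurityException("Missing anti-CSRF header");
                }
            }

            // ... deleteInventory etc. as above ...
        }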

    Read the article

  • recommended format to save time with MJD + BCD format in database

    - by pierr
    Hi,

    There is a time represented in MJD and BCD format with 5 bytes. I am wondering what the recommended format is for saving this date-time in an SQLite database so that users can search against it.

    My first attempt was to save it just as it is, that is, a 5-byte string. The user would use the same format to search, and the result would be converted to Unix time with the following code. However, I was later advised to save the time as an integer - the UTC time, for example - but I cannot find a standard way to do the conversion. I feel this is a common issue and would like to hear your comments.

        time_t sidate_to_unixtime(unsigned char sidate[])
        {
            int k = 0;
            struct tm tm;
            double mjd;

            /* check for the undefined value */
            if ((sidate[0] == 0xff) && (sidate[1] == 0xff) && (sidate[2] == 0xff)
                && (sidate[3] == 0xff) && (sidate[4] == 0xff)) {
                return -1;
            }

            memset(&tm, 0, sizeof(tm));
            mjd = (sidate[0] << 8) | sidate[1];

            tm.tm_year = (int) ((mjd - 15078.2) / 365.25);
            tm.tm_mon = (int) (((mjd - 14956.1) - (int) (tm.tm_year * 365.25)) / 30.6001);
            tm.tm_mday = (int) mjd - 14956 - (int) (tm.tm_year * 365.25) - (int) (tm.tm_mon * 30.6001);
            if ((tm.tm_mon == 14) || (tm.tm_mon == 15))
                k = 1;
            tm.tm_year += k;
            tm.tm_mon = tm.tm_mon - 2 - k * 12;

            tm.tm_sec = bcd_to_integer(sidate[4]);
            tm.tm_min = bcd_to_integer(sidate[3]);
            tm.tm_hour = bcd_to_integer(sidate[2]);

            return mktime(&tm);
        }
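
    If the integer route is taken, the SQLite side is straightforward - a sketch (table and column names are placeholders):

        CREATE TABLE events (
            id         INTEGER PRIMARY KEY,
            start_time INTEGER NOT NULL   -- unix epoch seconds, from sidate_to_unixtime()
        );

        -- Range search:
        SELECT id FROM events
        WHERE start_time BETWEEN CAST(strftime('%s', '2010-05-01') AS INTEGER)
                             AND CAST(strftime('%s', '2010-06-01') AS INTEGER);

        -- Display:
        SELECT datetime(start_time, 'unixepoch') FROM events;

    One caveat: mktime() interprets struct tm as local time, so for a true UTC epoch value, timegm() (where available) or an explicit timezone adjustment would be needed.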

    Read the article

  • How can I learn Enterprise Library 4.0?

    - by ykaratoprak
    I am trying to learn the Enterprise Library. I found this useful code to get data from SQL, but I would also like to send data via parameters, and to use UPDATE, DELETE and SAVE methods. Can you give a similar sample? I'm using Enterprise Library 4.0!

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Data;
        using Microsoft.Practices.EnterpriseLibrary.Common;
        using Microsoft.Practices.EnterpriseLibrary.Data;

        namespace WebApplicationForEnterpirires
        {
            public partial class _Default : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    Database objdbase = DatabaseFactory.CreateDatabase("connectionString");
                    DataSet ds = objdbase.ExecuteDataSet(CommandType.StoredProcedure, "sp_GetProducts");
                    GridView1.DataSource = ds;
                    GridView1.DataBind();
                }
            }
        }
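
    A sketch of the parameterized variants with the same Data Access block (procedure and parameter names are made up):

        Database db = DatabaseFactory.CreateDatabase("connectionString");

        // SELECT with a parameter
        DbCommand select = db.GetStoredProcCommand("sp_GetProductById");
        db.AddInParameter(select, "@ProductId", DbType.Int32, 42);
        DataSet ds = db.ExecuteDataSet(select);

        // UPDATE / DELETE / INSERT go through ExecuteNonQuery
        DbCommand update = db.GetStoredProcCommand("sp_UpdateProductName");
        db.AddInParameter(update, "@ProductId", DbType.Int32, 42);
        db.AddInParameter(update, "@Name", DbType.String, "New name");
        int rowsAffected = db.ExecuteNonQuery(update);

    DbCommand lives in System.Data.Common, so that namespace needs a using directive alongside the two Enterprise Library ones already in the snippet.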

    Read the article

  • How to rdc to a particular machine that is a member of a TS farm?

    - by Amit Arora
    I created a Terminal Services farm comprising 3 TS hosts (say, TS1, TS2 and TS3) running Windows 2008 R2 Enterprise, a TS Connection Broker, and a TS Gateway, for the purpose of hosting a Windows application as a TS RemoteApp. The setup works just fine.

    Now I want to make some further configuration changes on one particular TS host, say TS2, and not on any other. When I try to rdc to TS2, I find myself getting connected to a randomly chosen TS host (sometimes TS1, sometimes TS2, and at other times TS3). I think the rdc connection is also going via the Connection Broker, which forwards me to whichever TS host it decides is best.

    Is there a way I can deterministically connect to a particular TS host using rdc? I don't have the option to log in locally on a TS host, as the entire setup is hosted in a remote data center. I think this is a very common scenario and must have a straightforward solution. It could be as easy as connecting to the Connection Broker server via rdc and disabling it for a while, but I don't know how to do that either. Any help will be highly appreciated.
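
    One thing worth trying (as I understand it, administrative connections are not redirected by the Session Broker):

        rem Request the administrative session on the specific host
        mstsc /v:TS2 /admin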

    Read the article

  • .NET Web Service hydrate custom class

    - by row1
    I am consuming an external C# web service method which returns a simple calculation result object like this:

        [Serializable]
        public class CalculationResult
        {
            public string Name { get; set; }
            public string Unit { get; set; }
            public decimal? Value { get; set; }
        }

    When I add a Web Reference to this service in my ASP.NET project, Visual Studio is kind enough to generate a matching class so I can easily consume and work with it.

    I am using Castle Windsor, and I may want to plug in other methods of getting a calculation result object, so I want a common class CalculationResult (or ICalculationResult) in my solution which all my objects can work with; this will always match the object returned from the external web service 1:1. Is there any way I can tell my web service client to hydrate my particular class instead of the generated one? I would rather not do it manually:

        foreach (var fromService in calculationResultsFromService)
        {
            ICalculationResult calculationResult = new CalculationResult()
            {
                Name = fromService.Name
            };
            yield return calculationResult;
        }

    Edit: I am happy to use a Service Reference type instead of the older Web Reference.

    Read the article

  • Discussion on SEO best-practices for site development involving php...

    - by Bradley Herman
    Recently in our work I've started getting some experience with SEO (finally). It's something I've put off for a long time because I've always maintained that SEO is buzzword-laden pseudo-science and that it's really about providing quality, relevant content (assuming proper header tags and the basics are covered). However, sometimes a client doesn't have stellar content yet still demands SEO and high rankings.

    While it's not how I design sites 100% of the time (as design dictates structure), I typically create a basic template from the design my boss gives me, then optimize it, and then strip the top and bottom and move those to header.php and footer.php, using the following to bring in the header and footer based on AJAX versus HTML requests:

        <?php if($_SERVER['HTTP_X_REQUESTED_WITH']==''){ include('includes/header.php'); } ?>
        #content here
        <?php if($_SERVER['HTTP_X_REQUESTED_WITH']==''){ include('includes/footer.php'); } ?>

    Then I use jQuery to intercept page requests, and I use AJAX to fill in, for example, a #copy div with the new content. This avoids unnecessarily loading all the header and footer info every time, but still allows users without JavaScript to access pages without any problems. (Also worth considering: depending on the size of the content, do the extra HTTP requests added by this method put more strain on the server than a single, larger file?)

    I don't have a really solid understanding of meta keywords and their SEO significance, but as I recall reading, the keywords, title, and description on a page should match up with the page's content - i.e. each page should have slightly different keywords/descriptions while retaining some common ground.

    What I'm getting at here is trying to foster a discussion on whether my approach is flawed to begin with, whether there are things I can do (within reason) that keep the site structure simple but allow for better SEO practices, or whether my SEO understanding is wrong. This isn't a question per se, but hopefully a constructive discussion that more than just I can learn from. I appreciate any responses and hope to hear from you. Thanks!

    Read the article

  • How to accommodate for the iPhone 4 screen resolution?

    - by dontWatchMyProfile
    This is a programming question! Read on before you vote to close!

    According to Apple, the iPhone 4 has a new and better screen resolution: a 3.5-inch (diagonal) widescreen Multi-Touch display with 960-by-640-pixel resolution at 326 ppi. This little detail affects our apps in a heavy way. Most of the demo apps on the net have one thing in common: they position views in the belief that the screen has a fixed size of 320 x 480 pixels. So what most - if not all - developers do is design everything in such a way that a touchable area is, for example, 50 x 50 pixels big (just enough to tap it), with things positioned relative to the upper left to reach a specific position on screen - let's say the center, or somewhere at the bottom.

    Edit: It seems Apple has integrated a switch that allows one to tell whether an app is high-res or not. Nice.

    When we develop high-resolution apps, they probably won't work on older devices. And if they do, they will suffer a lot from images four times the size, having to scale them down in memory.
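
    A sketch of the usual accommodation (UIKit lays out in points, not pixels, so the iPhone 4 screen is still 320 x 480 points and existing frame-based layout keeps working):

        // scale is 2.0 on iPhone 4, 1.0 on earlier devices; the selector check
        // keeps this safe on pre-iOS 4 systems, where scale does not exist.
        CGFloat scale = 1.0f;
        if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
            scale = [UIScreen mainScreen].scale;
        }

        // Ship both icon.png and icon@2x.png; imageNamed: picks the
        // high-resolution variant automatically on Retina displays.
        UIImage *icon = [UIImage imageNamed:@"icon.png"];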

    Read the article
