Search Results



  • Keep taking the tablets

    - by Roger Hart
    A guest editorial for the SimpleTalk newsletter. So why would Red Gate build an iPad game? Is it just because tablet devices are exciting and cool? OK, maybe a little. Mostly, it was seeing that the best existing tablet and smartphone apps do simple, intuitive things, using simple, intuitive interfaces to solve single problems. That's pretty close to what we call our own "intuitively simple" approach to software. Tablets and mobile could be fantastic for us, if we can identify those problems that a tablet device can solve. How do you create THE next tool for a completely new technology? We're glad we don't face that problem every day, but it's pretty exciting when we do. We figure we should learn by doing. We created "MobileFoo" (a Red Gate company), picked up some shiny Apple tech, and got to grips with Objective-C and life in the App Store ecosystem. The result so far is an iPad game: Stacks and Heaps. It's Rob and Marine's spin on Snakes and Ladders. Instead of snakes we have unhandled exceptions, a blue screen of death, and other hazards. We wanted something compellingly geeky on mobile, and we're pretty sure we've got it. It's trudging through App Store approval as we speak, but if you want to get an idea of what it's like to switch from .NET to Objective-C, take a look at Rob's post. Android and iOS are quite a culture change for Windows developers. So to give them a feel for the problems real users might have, we needed some real users - we offered our colleagues subsidised tablets. The only conditions were that they get used at work, and we get the feedback. Seeing tablets around the office is starting to give us some data points: Is typing the bottleneck? Will tablets ever cut it as text-entry devices, and could we fix it? Is mobile working held up by the pain of connecting to work LANs? How about security? Multi-tasking will let tablets do more. They're small, easy to use, almost instant to switch on, and connect by Wi-Fi. There's plenty on that list to make a sysadmin twitchy. We'll find out as people spend more time working with these devices, and we'd love to hear what you think about tablet devices too. (Comments are filtered, what with the spam.)

    Read the article

  • When to use HTTP status code 404 in an API

    - by Sybiam
    I am working on a project, and after arguing with people at work for more than an hour I decided to see what people on Stack Exchange might say. We're writing an API for a system; there is a query that should return a tree of Organization or a tree of Goals. The tree of Organization is the organization in which the user is present; in other words, this tree should always exist. Within the organization, a tree of goals should always be present (that's where the argument started). In the case where the tree doesn't exist, my co-worker decided that it would be right to answer the request with status code 200, and then started asking me to fix my code because the application was falling apart when there is no tree. I'll try to spare flames and fury. I suggested raising a 404 error when there is no tree. It would at least let me know that something is wrong. When using 200, I have to add a special check to my response in the success callback to handle errors. I'm expecting to receive an object, but I may actually receive an empty response because nothing is found. It sounds totally fair to mark the response as a 404. And then war started, and I got the message that I didn't understand the HTTP status code scheme. So I'm here asking: what's wrong with 404 in this case? I even got the argument "It found nothing, so it's right to return 200." I believe that it's wrong, since the tree should always be present. If we found nothing and we are expecting something, it should be a 404. More info: I forgot to add the URLs that are fetched. Organizations: /OrgTree/Get. Goals: /GoalTree/GetByDate?versionDate=... and /GoalTree/GetById?versionId=... My mistake, both parameters are required. If any versionDate that can be parsed to a date is provided, it will return the closest revision. If you enter something in the past, it will return the first revision. If fetching by Id with an id that doesn't exist, I suspect it's going to return an empty response with 200. Extra: I also believe the best answer to the problem is to create default objects when organizations are created; having no tree shouldn't be a valid case and should be seen as undefined behavior. There is no way an account can be used without both trees; for that reason, they should always be present. I also got linked to this (or something similar that I can't find): http://viswaug.files.wordpress.com/2008/11/http-headers-status1.png
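    For illustration, here is a minimal ASP.NET Web API style sketch of the behaviour argued for above (the controller, types and lookup method are hypothetical stand-ins, not from the post): return the tree with 200 when it exists, and 404 when the expected resource is missing, so the client can tell "missing" apart from "empty success".

    ```csharp
    using System;
    using System.Web.Http;

    // Stand-in type for the sketch; the real project would have its own model.
    public class GoalTree { public int VersionId { get; set; } }

    public class GoalTreeController : ApiController
    {
        // GET /GoalTree/GetByDate?versionDate=...
        public IHttpActionResult GetByDate(DateTime versionDate)
        {
            GoalTree tree = FindClosestRevision(versionDate);   // hypothetical lookup
            if (tree == null)
                return NotFound();   // 404: the tree the client expects does not exist

            return Ok(tree);         // 200: the tree was found; the body contains it
        }

        private GoalTree FindClosestRevision(DateTime versionDate)
        {
            // Placeholder: the real implementation would query the data store.
            return null;
        }
    }
    ```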

    Read the article

  • Security Controls on data for P6 Analytics

    - by Jeffrey McDaniel
    The Star database and P6 Analytics calculate security based on P6 security, using OBS, global, project, cost, and resource security considerations. If there is some concern that users are not seeing the expected data in P6 Analytics, here are some areas to review:
    1. Determining whether a user has cost security is based on the project-level security privileges - either View Project Costs/Financials or Edit EPS Financials. If users expect to see costs, make sure one of these privileges is allocated.
    2. The user must have OBS access on a project, not at the WBS level; WBS-level security is not supported. Make sure the user has OBS access at the project level.
    3. Resource access is determined by what is granted in P6. Verify the resource access granted to this user in P6. Resource security is hierarchical; project access will override resource access based on the way security policies are applied.
    4. Module access must be given to a P6 user for that user to come over into Star/P6 Analytics. In earlier versions of the RDB there was a report_user_flag on the Users table; this flag field is no longer used after P6 Reporting Database 2.1.
    5. For P6 Reporting Database versions 2.2 and higher, the Extended Schema Security service must be run to calculate all security. After any changes to privileges or security, this service must be rerun before any ETL.
    6. In P6 Analytics 2.0 or higher, a WebLogic user must exist that matches the P6 username. For example, the user Tim must exist in both P6 and WebLogic for Tim to be able to log into P6 Analytics and access data based on P6 security. In earlier versions the username needed to exist in the RPD.
    7. Cache in OBI is another area that can sometimes make it seem as if a user isn't seeing the data they expect. While caching can be beneficial for performance in OBI, if the cache is outdated it can return older, stale data. Clearing or turning off the cache when rerunning a query can determine whether the returned result set came from the cache or from the database.

    Read the article

  • Please help me give this principle a name

    - by Brent Arias
    As a designer, I like providing interfaces that cater to a power/simplicity balance. For example, I think the LINQ designers followed that principle because they offered both dot notation and query notation. The first is more powerful, but the second is easier to read and follow. If you disagree with my assessment of LINQ, please try to see my point anyway; LINQ was just an example, my post is not about LINQ. I call this principle "dial-able power", but I'd like to know what other people call it. Certainly some will say "KISS" is the common term, but I see KISS as a superset, or a "consumerism" practice. Using LINQ as my example again, in my view a team of programmers who always try to use query notation over dot notation are practicing KISS. Thus the LINQ designers practiced "dial-able power", whereas the LINQ consumers practice KISS. The two make beautiful music together. I'll give another example. Imagine a C# logging tool that has two signatures allowing two uses:

        void Write(string message);
        void Write(Func<string> messageCallback);

    The purpose of the two signatures is to fulfill these needs:

        // Every-day "simple" usage, nothing special.
        myLogger.Write("Something Happened" + error.ToString());

        // This is performance critical; do not call ToString() if logging is disabled.
        myLogger.Write(() => "Something Happened" + error.ToString());

    Having these overloads represents "dial-able power," because the consumer has the choice of a simple interface or a powerful interface. A KISS-loving consumer will use the simpler signature most of the time, and will allow the "busy"-looking signature when the power is needed. This also helps self-documentation, because usage of the powerful signature tells the reader that the code is performance critical. If the logger had only the powerful signature, then there would be no "dial-able power." So this comes full circle. I'm happy to keep my own "dial-able power" coinage if none yet exists, but I can't help thinking I'm missing an obvious designation for this practice. P.S. Another example that is related, but is not the same as "dial-able power", is Scott Meyers' principle "make interfaces easy to use correctly, and hard to use incorrectly."
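    To make the deferred-evaluation point concrete, here is a minimal sketch (a hypothetical logger, assuming the two overloads above, not code from the post) showing why the Func<string> overload is the "powerful" end of the dial: the message string is only built when logging is actually enabled.

    ```csharp
    using System;

    // Hypothetical logger illustrating the two overloads.
    public class MyLogger
    {
        public bool IsEnabled { get; set; }

        // Simple overload: the caller always pays the cost of building the message string.
        public void Write(string message)
        {
            if (IsEnabled)
                Console.WriteLine(message);
        }

        // "Powerful" overload: the callback only runs when logging is enabled,
        // so expensive ToString()/concatenation work is skipped otherwise.
        public void Write(Func<string> messageCallback)
        {
            if (IsEnabled)
                Console.WriteLine(messageCallback());
        }
    }
    ```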

    Read the article

  • Softpedia published some of my open source projects — how to react?

    - by polarblau
    (FYI: I've just moved this question over from Stackoverflow on recommendation.) I just received a few emails, informing me that softpedia.com has added some of my "products" to their "database of scripts, code snippets and web applications". My products are in this case some smaller open source projects, which I have hosted and published on github. Now I'm wondering how to react to this. This site is indirectly making money of my free work through ads on three pages before the actual download. They also seem to "invent" version numbers and I can't find out if they're hosting the latest or all versions of my projects. — I can see how this could lead to problems in the future, since I don't control what's "the latest" everywhere. On the other hand I don't mind some extra publicity. I want as many people as possible to know about the projects, use them, fork them and hopefully improve them. The projects in questions are really fairly small, but this might not be the case in the future for me and/or other people reading this question. I'm sure that this must have happened to others around here. What's your opinion? Should I try to get the downloads removed? Update 1 I've requested the removal and mentioned that I don't feel that Softpedia can provide the right environment for this kind of project. Their team got back to me instantly with a friendly email saying, that they'll remove the links for now: If you are worried that your projects won't be updated, then I must tell you that I have them bookmarked in my RSS reader, so any version changes will be forwarded to me when needed. So I promise I'll keep your script up to date as soon as I see an update in the repository. I have to say, that I appreciate this kind of reaction quite a lot and so I sent them another email, describing in more detail what I'm worried about and what bothers me. I also stated, that I'm aware that my license clearly permits them to host the projects in any case, but that I'd be even happy if they would host the projects as long as they could convince me of a few details and maybe make some small changes to the way the projects are represented. — Let's see where this goes. Update 2 After discussing with their contact and requesting some changes regarding display of version (they had given the possibility to do so) and authorship they put the projects back up on their site. All in all a positive and definitely interesting experience.

    Read the article

  • New Options for MySQL High Availability

    - by Mat Keep
    Data is the currency of today’s web, mobile, social, enterprise and cloud applications. Ensuring data is always available is a top priority for any organization – minutes of downtime will result in significant loss of revenue and reputation. There is not a “one size fits all” approach to delivering High Availability (HA). Unique application attributes, business requirements, operational capabilities and legacy infrastructure can all influence HA technology selection. And then technology is only one element in delivering HA – “People and Processes” are just as critical as the technology itself. For this reason, MySQL Enterprise Edition is available supporting a range of HA solutions, fully certified and supported by Oracle. MySQL Enterprise HA is not some expensive add-on, but included within the core Enterprise Edition offering, along with the management tools, consulting and 24x7 support needed to deliver true HA. At the recent MySQL Connect conference, we announced new HA options for MySQL users running on both Linux and Solaris: - DRBD for MySQL - Oracle Solaris Clustering for MySQL DRBD (Distributed Replicated Block Device) is an open source Linux kernel module which leverages synchronous replication to deliver high availability database applications across local storage. DRBD synchronizes database changes by mirroring data from an active node to a standby node and supports automatic failover and recovery. Linux, DRBD, Corosync and Pacemaker, provide an integrated stack of mature and proven open source technologies. DRBD Stack: Providing Synchronous Replication for the MySQL Database with InnoDB Download the DRBD for MySQL whitepaper to learn more, including step-by-step instructions to install, configure and provision DRBD with MySQL Oracle Solaris Cluster provides high availability and load balancing to mission-critical applications and services in physical or virtualized environments. With Oracle Solaris Cluster, organizations have a scalable and flexible solution that is suited equally to small clusters in local datacenters or larger multi-site, multi-cluster deployments that are part of enterprise disaster recovery implementations. The Oracle Solaris Cluster MySQL agent integrates seamlessly with MySQL offering a selection of configuration options in the various Oracle Solaris Cluster topologies. Putting it All Together When you add MySQL Replication and MySQL Cluster into the HA mix, along with 3rd party solutions, users have extensive choice (and decisions to make) to deliver HA services built on MySQL To make the decision process simpler, we have also published a new MySQL HA Solutions Guide. Exploring beyond just the technology, the guide presents a methodology to select the best HA solution for your new web, cloud and mobile services, while also discussing the importance of people and process in ensuring service continuity. This is subject recently presented at Oracle Open World, and the slides are available here. Whatever your uptime requirements, you can be sure MySQL has an HA solution for your needs Please don't hesitate to let us know of your HA requirements in the comments section of this blog. You can also contact MySQL consulting to learn more about their HA Jumpstart offering which will help you scope out your scaling and HA requirements.

    Read the article

  • Comparison of Extreme Programming (XP) to Traditional Programming Methodologies

    The comparison of extreme programming (XP) to traditional programming methodologies can be likened to the historic biblical battle between David and Goliath. Goliath of Gath is a Philistine warrior renowned for his size, strength and battle-tested skills. Much like Goliath, traditional methodologies are known to be cumbersome due to large amounts of documentation, and time-consuming due to the time needed to gather all the information. However, traditional methodologies have been widely accepted by the software development community for years because of their attention to detail regarding project development and maintenance. David is a male Israelite teenager who was small, fearless, and untrained in any type of formal combat. In a similar fashion, extreme programming focuses more on code than on documentation, so that time is spent on developing the project and not on cumbersome documentation of a project. Typically, project managers and developers are fearless when they start this type of project because they usually start with little to no documentation, and they expect to be given changes to implement at the start of every new project iteration. Because of the lack of need or desire for documentation in extreme programming projects, they appear to act as if there is no formal process involved in developing an extreme programming project. This is a misconception: because of the consistent development iterations and interaction with clients and users, the project quickly takes form, as each iteration allows the project to be refined while the customer's needs and desires change. Ravikant Agarwal and David Umphress documented a new approach to extreme programming, called personal extreme programming (PXP), at the ACM Southeast Regional Conference in 2008. PXP is the application of extreme programming core concepts in a single-developer team environment. PXP focuses on how the main concepts and practices of extreme programming, which are typically centered on a group environment, can be altered to be beneficial for a single-developer environment. Suzanne Smith and Sara Stoecklin are both advocates of extreme programming, according to the Journal of Computing Sciences in Colleges, and in fact they feel that it should receive more attention in introductory programming classes to allow students to better understand the software development process. Reasons why extreme programming is a good thing: developers get to do more of what they love - develop. Traditional software development methodologies tend to add additional demands on a project by requiring all requirements and project specifications to be fully defined prior to the start of the implementation phase of a project. A standard 40-hour work week: limiting the work week to only 40 hours prevents developers from getting burned out on projects.

    Read the article

  • More Changes...

    - by MOSSLover
    Stuff has changed drastically for me in the past two to three years.  I moved over 1000 miles from Saint Louis.  I go outside and I get up in front of crowds with less issues.  Now I'm changing jobs again.  I'm not really sure what to say here.  I was obviously unhappy and I needed to do something different.  So quit two days ago and I guess it worked out that I end with B&R this Friday, then head to TEC and SPS Huntsville and a week from this Monday I start my new job at Gig Werks.  I'm not sure what to expect or where I'm heading, but I think it's a step in the right direction.  I won't really know what kind of impact this will have on my life for at least another 6 months to a year. For some reason I can't sleep tonight and I think it's really a reflection of my last day.  Tomorrow is an ending and a beginning at the same time.  So it's both kind of sad and exciting.  I don't know why I'm really excited to go to Disney Land for the second time ever in my life time.  I get to ride the Teacups.  For the longest time when I was a kid I wanted to go to Disney Land.  I wanted to ride the teacups.  In 2007, at the age of 25, I rode the teacups for my first ever visit to LA.  That was the start of finally syncing up with my childhood goals.  I wanted to live near a major city.  I wanted to visit all the major cities in the world.  I wanted to see everything and meet everyone.  This job change will probably turn into something great I just don't know it yet.  I'm walking again outside my comfort zone and stepping into uncharted territory.  In 2-3 years I'll probably write another blog post how this week lead to something great.  It just stinks when you have to leave behind something you know and love.  I will miss all my current colleagues, but I'm sure I'll gain some new ones and keep in touch with the old.  To 2010 being a great year for change and hopefully by the end of the year I can say I went to Europe.  To reaching my goals and my dreams.  Don't let anyone stop you from getting what you want in life (unless you are axe murderer please don't kill anyone that's just wrong).  Have a good weekend everyone!

    Read the article

  • Log Debug Messages without Debug Serial on Shipped Device

    - by Kate Moss' Open Space
    Debug messages are an ancient but useful way of resolving problems. Messages are redirected to Platform Builder if KITL is enabled; otherwise they go to the default debug port, usually a serial port on most platforms, but it really depends on how OEMWriteDebugString and OEMWriteDebugByte are implemented. For many reasons we may not want to have a debug serial port - for example, we don't have enough spare serial ports, or it can affect performance. So some BSP designers decide to dump the messages to other media: a log file, shared memory, or any solution that is suitable for the need. In CE 5.0 and earlier, the OAL and kernel are linked into one binary; in other words, you can use any function in the kernel, such as SC_CreateFileW to access the filesystem from the OAL, even though this is strongly discouraged. But since the OAL became a standalone executable in CE 6.0, we can no longer use this back door - only the interface exported in NKGlobal, which provides just enough for the OAL and no more. Accessing the filesystem or using sync objects to communicate with other drivers or applications is not even an option. It sounds like the kernel has locked itself up; of course, the OAL is in kernel space, so you can still hack into the kernel however you want, but once again that is not only a dirty solution but also a fragile one. So isn't there an elegant solution? Let's see how a debug message gets printed out. In private\winceos\COREOS\nk\kernel\printf.c, OutputDebugStringW is the function that pumps out the messages; most of the code is for error handling and serialization, but what is really interesting is the following code piece:

        if (g_cInterruptsOff) {
            OEMWriteDebugString ((unsigned short *)str);
        } else {
            g_pNKGlobal->pfnWriteDebugString ((unsigned short *)str);
        }
        CELOG_OutputDebugString(dwActvProcId, dwCurThId, str);

    It outputs the message to the default debug output (redirected to KITL when available) or to the OAL when needed, but note the highlighted part: it also invokes CELOG_OutputDebugString. Follow the thread to private\winceos\COREOS\nk\logger\CeLogInstrumentation.c: this function dumps whatever it receives to CELOG. So whatever the debug message is, we always get a clone of it in CELOG. Generally speaking, all of the debug messages are logged to CELOG already, so what you need to do is run celogflush.exe with the CELZONE_DEBUG zone and then view the data using the Readlog tool. Here is some information about these tools:
    CELOG - http://msdn.microsoft.com/en-us/library/ee479818.aspx
    READLOG - http://msdn.microsoft.com/en-us/library/ee481220.aspx
    Also, for advanced readers, I encourage you to dig into private\winceos\COREOS\nk\celog\celogdll, the source of CELOG.DLL, and use it as a starting point to create a more lightweight debug message logger for your own device!

    Read the article

  • Voxel Engine in Multiplayer?

    - by Oliver Schöning
    This is a question more out of interest for now, because I am not even near the point where I could create this project at the moment. I really like the progress on the Atomontage Engine, a voxel engine that is a work in progress at the moment. I would like to create a voxel SERVER eventually - first in JavaScript (that's what I am learning right now), later perhaps in C++ for speed. Remember, I am perfectly aware that this is very hard! This is a brainstorm for the next 10 years as of now. What I would like to achieve one day is a multiplayer game in the browser where the voxel positions are updated by XYZ input from the server. The browser does only three things: sending player input to the server, updating voxel positions sent from the server, and rendering the world. I imagine using something like the Three.js library on the client side. So that would be my programming dream right there... Now to something simpler for the near future. Right now I am learning JavaScript, and I am making games with Construct 2 (a really cool JavaScript "game maker"). The plan is to create a 2D voxel environment (block voxels) on the Socket.IO server* and send the positions of the voxels and players to the client side, which then positions the voxel blocks at the coordinates the server outputs. I think that is a bit more manageable than the other, bigger idea. And also there should be no worries about speed with this type of project in JavaScript (I hope). Extra info: *I am using Node.js (without really knowing what it does besides making Socket.IO work). So now some questions: Is the "dream project" doable in JavaScript? Or is C++ just the best option because it does not have to be interpreted at run time like JavaScript? What are the limitations? I can think of some: the need for a powerful server, depending on how much information the server has to process. Internet speed: sending the voxel position data to every player could add up to be very high! The browser FPS might go down quickly if rendering too many objects. One way of reducing the packets could be to let the browser calculate some of the voxel positions from several values, but that would slow down the client side too. What about the more achievable project? I am almost 100% convinced that this is possible in JavaScript, and that there are several ways of doing it. This is just XY position updating for now. Hope this made some sense. Please comment if you have something to say :D

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is store. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.Hard Link Issues in a Component LibraryAnother reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to require a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change.  So hard linking the DLL can be problematic for a number reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.Enter Dynamic LoadingSo rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future.But there are also a couple of downsides:No assembly reference means only dynamic access - no compiler type checking or IntellisenseRequirement for the host application to have reference to JSON.NET or else get runtime errorsThe former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.So there are a few things that are needed to make this work:Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)Load types into dynamic variablesUse Reflection for a few tasks like statics/enumsThe dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is it doesn't handle all aspects of Reflection. 
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection.Dynamic Object InstantiationThe first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might have not been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable project, assemblies are just in time loaded only when they are accessed.Instantiating a type is a two step process: Finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config setting that might be useful to set, but the default seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.Another feature I need is have Enum values serialized as strings rather than numeric values which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.Doing the actual JSON ConversionFinally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files so I created four methods that handle these tasks two each for serialization and deserialization for string and file.Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.Using the JsonSerializationUtils WrapperThe final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, that is responsible for handling reading and writing JSON values to and from files. The provider is simple a small wrapper around the SerializationUtils component and there's very little code to make this work now:The whole provider looks like this:/// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } }This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases.Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.Already, there are several other places in some other tools where I use JSON serialization this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use replacing quite a bit of code that was previously in use. 
    And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy. Performance? Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will change though depending on the size of the objects serialized - the larger the object, the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling' as I do here, the overhead is small enough not to matter. All that being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy performance. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency. Dynamic Loading - Worth it? Dynamic loading is not something you need to worry about often, but on occasion dynamic loading makes sense. There's a price to be paid in added code and a performance hit which depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files, adding dependencies and loading down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset… © Rick Strahl, West Wind Technologies, 2005-2013. Posted in .NET, C#
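    As a quick usage sketch based on the signatures shown above (the file path and the MyAppConfig type are hypothetical stand-ins), the caller never references JSON.NET directly; the wrapper resolves it dynamically at runtime:

    ```csharp
    // Hypothetical caller; MyAppConfig stands in for a class derived from AppConfiguration.
    var config = new MyAppConfig();

    // Write pretty-printed JSON to disk (signature from SerializeToFile above).
    bool saved = JsonSerializationUtils.SerializeToFile(config, @"C:\temp\settings.json",
                                                        throwExceptions: false,
                                                        formatJsonOutput: true);

    // Read it back; DeserializeFromFile returns object, so cast to the concrete type.
    var loaded = JsonSerializationUtils.DeserializeFromFile(@"C:\temp\settings.json",
                                                            typeof(MyAppConfig)) as MyAppConfig;
    ```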

    Read the article

  • How I might think like a hacker so that I can anticipate security vulnerabilities in .NET or Java before a hacker hands me my hat [closed]

    - by Matthew Patrick Cashatt
    Premise: I make a living developing web-based applications for all form factors (mobile, tablet, laptop, etc.). I make heavy use of SOA, and send and receive most data as JSON objects. Although most of my work is done on the .NET or Java stacks, I have also recently been delving into Node.js. This new stack has me thinking that I know reasonably well how to secure applications using the known facilities of .NET and Java, but I am woefully ignorant when it comes to best practices or, more importantly, the driving motivation behind the best practices. You see, as I gain more prominent clientele, I need to be able to assure them that their applications are secure and, in order to do that, I feel that I should learn to think like a malevolent hacker. What motivates a malevolent hacker? What is their prime mover? What is it that they are most after? Ultimately, the answer is money or notoriety, I am sure, but I think it would be good to understand the nuanced motivators that lead to those ends: credit card numbers, damning information, corporate espionage, shutting down a highly visible site, etc. As an extension of question #1 - but more specific - what are the things most likely to be sought out by a hacker in almost any application? Passwords? Financial info? Profile data that will gain them access to other applications a user has joined? Let me be clear here: this is not judgement for or against the aforementioned motivations, because that is not the goal of this post. I simply want to know what motivates a hacker regardless of our individual judgement. What are some heuristics followed to accomplish hacker goals? Ultimately, specific processes would be great to know; however, in order to think like a hacker, I would really value your comments on the broader heuristics followed. For example: "A hacker always looks first for the low-hanging fruit, such as HTTP spoofing", or "In the absence of a CAPTCHA or other deterrent, a hacker will likely run a cracking script against a login prompt and then go from there", or possibly "A hacker will try to attack a site via Foo (browser) first, as it is known for the Bar vulnerability." What are the most common hacks employed when following the common heuristics? Specifics here: HTTP spoofing, password cracking, SQL injection, etc. Disclaimer: I am not a hacker, nor am I judging hackers (heck - I even respect their ingenuity). I simply want to learn how I might think like a hacker so that I may begin to anticipate vulnerabilities before .NET or Java hands me a way to defend against them after the fact.
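    Since SQL injection is one of the concrete attacks named above, here is a minimal C# illustration of the standard defence - parameterized queries via System.Data.SqlClient. The connection string, table and column names are hypothetical; this is a sketch, not a complete hardening guide.

    ```csharp
    using System.Data.SqlClient;

    public static class UserLookup
    {
        // Hypothetical lookup; never concatenate user input into the SQL text.
        public static bool UserExists(string connectionString, string userName)
        {
            const string sql = "SELECT COUNT(*) FROM Users WHERE UserName = @userName";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                // The value travels as a typed parameter, not spliced into the query,
                // so input like "'; DROP TABLE Users; --" is treated as plain data.
                command.Parameters.AddWithValue("@userName", userName);

                connection.Open();
                return (int)command.ExecuteScalar() > 0;
            }
        }
    }
    ```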

    Read the article

  • ASP.NET Hosting :: ASP.NET File Upload Control

    - by mbridge
    The asp.net FileUpload control allows a user to browse and upload files to the web server. From developers perspective, it is as simple as dragging and dropping the FileUpload control to the aspx page. An extra control, like a Button control, or some other control is needed, to actually save the file. <asp:FileUploadID="FileUpload1"runat="server"/> <asp:ButtonID="B1"runat="server"Text="Save"OnClick="B1_Click"/> By default, the FileUpload control allows a maximum of 4MB file to be uploaded and the execution timeout is 110 seconds. These properties can be changed from within the web.config file’s httpRuntime section. The maxRequestLength property determines the maximum file size that can be uploaded. The executionTimeout property determines the maximum time for execution. <httpRuntimemaxRequestLength="8192"executionTimeout="220"/> From code behind, the mime type, size of the file, file name and the extension of the file can be obtained. The maximum file size that can be uploaded can be obtained and modified using the System.Web.Configuration.HttpRuntimeSection class. Files can be alternatively saved using the System.IO.HttpFileCollection class. This collection class can be populated using the Request.Files property. The collection contains HttpPostedFile class which contains a reference to the class. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.IO; using System.Configuration; using System.Web.Configuration;   namespace WebApplication1 {     public partial class WebControls : System.Web.UI.Page     {         protected void Page_Load(object sender, EventArgs e)         {         }           //Using FileUpload control to upload and save files         protected void B1_Click(object sender, EventArgs e)         {             if (FileUpload1.HasFile && FileUpload1.PostedFile.ContentLength > 0)             {                 //mime type of the uploaded file                 string mimeType = FileUpload1.PostedFile.ContentType;                   //size of the uploaded file                 int size = FileUpload1.PostedFile.ContentLength; // bytes                   //extension of the uploaded file                 string extension = System.IO.Path.GetExtension(FileUpload1.FileName);                                  //save file                 string path = Server.MapPath("path");                                 FileUpload1.SaveAs(path + FileUpload1.FileName);                              }             //maximum file size allowed             HttpRuntimeSection rt = new HttpRuntimeSection();             rt.MaxRequestLength = rt.MaxRequestLength * 2;             int length = rt.MaxRequestLength;                     //execution timeout             TimeSpan ts = rt.ExecutionTimeout;             double secomds = ts.TotalSeconds;           }           //Using Request.Files to save files         private void AltSaveFile()         {             HttpFileCollection coll = Request.Files;             for (int i = 0; i < coll.Count; i++)             {                 HttpPostedFile file = coll[i];                   if (file.ContentLength > 0)                     ;//do something             }         }     } }
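    One hardening step worth sketching (an illustration assuming the same page, FileUpload1 control and Save button as above, not part of the original article): validate the uploaded file's extension and size before calling SaveAs, since the FileUpload control itself accepts any file type.

    ```csharp
    // Sketch only - the allowed extensions and the 1 MB cap are example policy choices.
    protected void B1_Click(object sender, EventArgs e)
    {
        if (!FileUpload1.HasFile || FileUpload1.PostedFile.ContentLength == 0)
            return;

        string extension = System.IO.Path.GetExtension(FileUpload1.FileName).ToLowerInvariant();
        string[] allowed = { ".jpg", ".png", ".pdf" };

        // Reject unexpected file types and oversized uploads.
        if (Array.IndexOf(allowed, extension) < 0 || FileUpload1.PostedFile.ContentLength > 1048576)
            return;

        // Use only the file name, never a client-supplied path, when building the save path.
        string fileName = System.IO.Path.GetFileName(FileUpload1.FileName);
        FileUpload1.SaveAs(Server.MapPath("~/uploads/") + fileName);
    }
    ```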

    Read the article

  • Physics/Graphics Components

    - by Brett Powell
    I have spent the last 48 hours reading up on Object Component systems, and feel I am ready enough to start implementing it. I got the base Object and Component classes created, but now that I need to start creating the actual components I am a bit confused. When I think of them in terms of HealthComponent or something that would basically just be a property, it makes perfect sense. When it is something more general as a Physics/Graphics component, I get a bit confused. My Object class looks like this so far (If you notice any changes I should make please let me know, still new to this)... typedef unsigned int ID; class GameObject { public: GameObject(ID id, Ogre::String name = ""); ~GameObject(); ID &getID(); Ogre::String &getName(); virtual void update() = 0; // Component Functions void addComponent(Component *component); void removeComponent(Ogre::String familyName); template<typename T> T* getComponent(Ogre::String familyName) { return dynamic_cast<T*>(m_components[familyName]); } protected: // Properties ID m_ID; Ogre::String m_Name; float m_flVelocity; Ogre::Vector3 m_vecPosition; // Components std::map<std::string,Component*> m_components; std::map<std::string,Component*>::iterator m_componentItr; }; Now the problem I am running into is what would the general population put into Components such as Physics/Graphics? For Ogre (my rendering engine) the visible Objects will consist of multiple Ogre::SceneNode (possibly multiple) to attach it to the scene, Ogre::Entity (possibly multiple) to show the visible meshes, and so on. Would it be best to just add multiple GraphicComponent's to the Object and let each GraphicComponent handle one SceneNode/Entity or is the idea to have one of each Component needed? For Physics I am even more confused. I suppose maybe creating a RigidBody and keeping track of mass/interia/etc. would make sense. But I am having trouble thinking of how to actually putting specifics into a Component. Once I get a couple of these "Required" components done, I think it will make a lot more sense. As of right now though I am still a bit stumped.

    Read the article

  • MythTV lost recordings - "No recordings available" and no recording rules either

    - by nimasmi
    I have a c.6 year old mythtv database. I recently upgraded from Ubuntu 10.04 to 12.04. This brought a MythTV upgrade from 0.24 to 0.25, which went well. Today, all my recordings have disappeared. They still exist in the /var/lib/mythtv/recordings folder, and the 'M' key in the Watch Recordings page says that there are 201 recordings available somewhere, but they will not display. See screenshot: (implicit thanks to whomever upvoted this, giving me sufficient reputation to upload images) Changing the filter does not remedy the fact that there is nothing shown in the lists. My Upcoming Recordings screen says that there are no rules set, but my list of previously recorded shows is still there, and has an entry from as recently as 3am today. mythbackend --printsched gives the following: user@box:~$ mythbackend --printsched 2012-09-22 12:59:20.537008 C mythbackend version: fixes/0.25 [v0.25.2-15-g46cab93] www.mythtv.org 2012-09-22 12:59:20.537043 C Qt version: compile: 4.8.1, runtime: 4.8.1 2012-09-22 12:59:20.537048 N Enabled verbose msgs: general 2012-09-22 12:59:20.537076 N Setting Log Level to LOG_INFO 2012-09-22 12:59:20.537142 I Added logging to the console 2012-09-22 12:59:20.537152 I Added database logging to table logging 2012-09-22 12:59:20.537279 N Setting up SIGHUP handler 2012-09-22 12:59:20.537373 N Using runtime prefix = /usr 2012-09-22 12:59:20.537394 N Using configuration directory = /home/user/.mythtv 2012-09-22 12:59:20.537999 I Assumed character encoding: en_GB.UTF-8 2012-09-22 12:59:20.538599 N Empty LocalHostName. 2012-09-22 12:59:20.538610 I Using localhost value of box 2012-09-22 12:59:20.538792 I Testing network connectivity to '192.168.1.2' 2012-09-22 12:59:20.539420 I Starting process manager 2012-09-22 12:59:20.541412 I Starting IO manager (read) 2012-09-22 12:59:20.541715 I Starting IO manager (write) 2012-09-22 12:59:20.541836 I Starting process signal handler 2012-09-22 12:59:20.684497 N Setting QT default locale to EN_GB 2012-09-22 12:59:20.684694 I Current locale EN_GB 2012-09-22 12:59:20.684813 N Reading locale defaults from /usr/share/mythtv//locales/en_gb.xml 2012-09-22 12:59:20.697623 I New static DB connectionDataDirectCon 2012-09-22 12:59:20.704769 I MythCoreContext: Connecting to backend server: 192.168.1.2:6543 (try 1 of 1) Calculating Schedule from database. Inputs, Card IDs, and Conflict info may be invalid if you have multiple tuners. 2012-09-22 12:59:27.710538 E MythSocket(21dfcd0:14): readStringList: Error, timed out after 7000 ms. 2012-09-22 12:59:27.710592 C Protocol version check failure. The response to MYTH_PROTO_VERSION was empty. This happens when the backend is too busy to respond, or has deadlocked in due to bugs or hardware failure. Things I have tried so far: restart the backend restart the frontend run mythtv-setup and check database passwords and IP addresses change the frontend setting for backend IP from localhost to 192.168.1.2 (the backend/frontend's IP) run optimize_mythdb.pl Other suggestions appreciated.

    Read the article

  • Reinventing the Paged IEnumerable, Weigert Style!

    - by adweigert
    I am pretty sure someone else has done this - I've seen variations such as PagedList<T> - but this is my style of a paged IEnumerable collection. I just store a reference to the collection and generate the paged data when the enumerator is needed, so you could technically add to a list that I'm referencing and the properties and results would be adjusted accordingly. I don't mind reinventing the wheel when I can add some of my own personal flair ...

        // Extension method for easy use
        public static PagedEnumerable<T> AsPaged<T>(this IEnumerable<T> collection, int currentPage = 1, int pageSize = 0)
        {
            Contract.Requires(collection != null);
            Contract.Assume(currentPage >= 1);
            Contract.Assume(pageSize >= 0);
            return new PagedEnumerable<T>(collection, currentPage, pageSize);
        }

        public class PagedEnumerable<T> : IEnumerable<T>
        {
            public PagedEnumerable(IEnumerable<T> collection, int currentPage = 1, int pageSize = 0)
            {
                Contract.Requires(collection != null);
                Contract.Assume(currentPage >= 1);
                Contract.Assume(pageSize >= 0);
                this.collection = collection;
                this.PageSize = pageSize;
                this.CurrentPage = currentPage;
            }

            IEnumerable<T> collection;

            int currentPage;
            public int CurrentPage
            {
                get
                {
                    if (this.currentPage > this.TotalPages) { return this.TotalPages; }
                    return this.currentPage;
                }
                set
                {
                    if (value < 1) { this.currentPage = 1; }
                    else if (value > this.TotalPages) { this.currentPage = this.TotalPages; }
                    else { this.currentPage = value; }
                }
            }

            int pageSize;
            public int PageSize
            {
                get
                {
                    if (this.pageSize == 0) { return this.collection.Count(); }
                    return this.pageSize;
                }
                set { this.pageSize = (value < 0) ? 0 : value; }
            }

            public int TotalPages
            {
                get { return (int)Math.Ceiling(this.collection.Count() / (double)this.PageSize); }
            }

            public IEnumerator<T> GetEnumerator()
            {
                var pageSize = this.PageSize;
                var currentPage = this.CurrentPage;
                var startCount = (currentPage - 1) * pageSize;
                return this.collection.Skip(startCount).Take(pageSize).GetEnumerator();
            }

            IEnumerator IEnumerable.GetEnumerator()
            {
                return this.GetEnumerator();
            }
        }
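    A quick usage sketch of the AsPaged extension method defined above (the sample data is arbitrary):

    ```csharp
    using System;
    using System.Linq;

    class Program
    {
        static void Main()
        {
            var numbers = Enumerable.Range(1, 95).ToList();

            // Page 3 with 10 items per page yields items 21..30.
            var page = numbers.AsPaged(currentPage: 3, pageSize: 10);

            Console.WriteLine(page.TotalPages);         // 10 (95 items, 10 per page, rounded up)
            Console.WriteLine(string.Join(", ", page)); // 21, 22, ..., 30
        }
    }
    ```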

    Read the article

  • Class Design -- Multiple Calls from One Method or One Call from Multiple Methods?

    - by Andrew
    I've been working on some code recently that interfaces with a CMS we use and it's presented me with a question on class design that I think is applicable in a number of situations. Essentially, what I am doing is extracting information from the CMS and transforming this information into objects that I can use programatically for other purposes. This consists of two steps: Retrieve the data from the CMS (we have a DAL that I use, so this is essentially just specifying what data from the CMS I want--no connection logic or anything like that) Map the parsed data to my own [C#] objects There are basically two ways I can approach this: One call from multiple methods public void MainMethodWhereIDoStuff() { IEnumerable<MyObject> myObjects = GetMyObjects(); // Do other stuff with myObjects } private static IEnumerable<MyObject> GetMyObjects() { IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems(); List<MyObject> mappedObjects = new List<MyObject>(); // do stuff to map the CmsDataItems to MyObjects return mappedObjects; } private static IEnumerable<CmsDataItem> GetCmsDataItems() { List<CmsDataItem> cmsDataItems = new List<CmsDataItem>(); // do stuff to get the CmsDataItems I want return cmsDataItems; } Multiple calls from one method public void MainMethodWhereIDoStuff() { IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems(); IEnumerable<MyObject> myObjects = GetMyObjects(cmsDataItems); // do stuff with myObjects } private static IEnumerable<MyObject> GetMyObjects(IEnumerable<CmsDataItem> itemsToMap) { // ... } private static IEnumerable<CmsDataItem> GetCmsDataItems() { // ... } I am tempted to say that the latter is better than the former, as GetMyObjects does not depend on GetCmsDataItems, and it is explicit in the calling method the steps that are executed to retrieve the objects (I'm concerned that the first approach is kind of an object-oriented version of spaghetti code). On the other hand, the two helper methods are never going to be used outside of the class, so I'm not sure if it really matters whether one depends on the other. Furthermore, I like the fact that in the first approach the objects can be retrieved from one line-- most likely anyone working with the main method doesn't care how the objects are retrieved, they just need to retrieve the objects, and the "daisy chained" helper methods hide the exact steps needed to retrieve them (in practice, I actually have a few more methods but am still able to retrieve the object collection I want in one line). Is one of these methods right and the other wrong? Or is it simply a matter of preference or context dependent?
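    A minimal sketch of the second shape, using hypothetical stand-in types (the real CMS and DAL types are not shown above), illustrates why the explicit composition keeps the mapping step independently testable:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical stand-ins for the CMS types referred to in the question.
    public class CmsDataItem { public string Title { get; set; } }
    public class MyObject   { public string Name  { get; set; } }

    public static class CmsMapper
    {
        // Pure mapping step: it has no knowledge of where the CmsDataItems came from,
        // so it can be exercised with an in-memory list instead of a live CMS/DAL call.
        public static IEnumerable<MyObject> GetMyObjects(IEnumerable<CmsDataItem> itemsToMap)
        {
            return itemsToMap.Select(i => new MyObject { Name = i.Title }).ToList();
        }
    }

    class Demo
    {
        static void Main()
        {
            // The calling method composes the two steps explicitly ("multiple calls from one method").
            IEnumerable<CmsDataItem> cmsDataItems = new List<CmsDataItem>
            {
                new CmsDataItem { Title = "Home" },
                new CmsDataItem { Title = "About" }
            };

            IEnumerable<MyObject> myObjects = CmsMapper.GetMyObjects(cmsDataItems);
            Console.WriteLine(string.Join(", ", myObjects.Select(o => o.Name))); // Home, About
        }
    }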

    Read the article

  • BizTalk 2009 - Custom Functoid Categories

    - by StuartBrierley
    I recently had cause to code a number of custom functoids to aid with some maps that I was writing. Once these were developed and deployed to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions, a quick refresh allowed them to appear in the toolbox.  After dropping these on a map and configuring the appropriate inputs I tested the map to check that they worked as expected.  All but one of the functoids worked as expected, but the final functoid appeared not to be firing at all. I had already tested the code used in a simple test harness application, so I was confident in the code used, but I still needed to figure out what the problem might be. Debugging the map helped me on the way; for some reason the functoid in question was not shown correctly - the functoid definition was wrong. After some investigation I found that the functoid type you assign when coding a custom functoid affects more than just the category it appears in; different functoid types have different capabilities, including what they can link to.  For example, a logical functoid cannot provide content for an output element, it can only say whether the element exists.  Map this via a Value Mapping functoid and the value of true or false can be seen in the output element. The functoid I was having problems with was one where I had used the XPath functoid type; this had seemed to be a good fit as I was looking up content in a config file using XPath and I wanted it to appear in the Advanced area.  From the table below you can see that this functoid type is marked as "Internal Use Only", preventing it from being used for custom functoids.  Changing my type to String allowed the functoid to function as expected.
    Category | Description | Toolbox Group
    Assert | Internal Use Only | Advanced
    Conversion | Converts characters to and from numerics and converts numbers from one base to another. | Conversion
    Count | Internal Use Only | Advanced
    Cumulative | Performs accumulations of the value of a field that occurs multiple times in a source document and outputs a single output. | Cumulative
    DatabaseExtract | Internal Use Only | Database
    DatabaseLookup | Internal Use Only | Database
    DateTime | Adds date, time, date and time, or add days to a specified date, in output data. | Date/Time
    ExistenceLooping | Internal Use Only | Advanced
    Index | Internal Use Only | Advanced
    Iteration | Internal Use Only | Advanced
    Keymatch | Internal Use Only | Advanced
    Logical | Controls conditional behavior of other functoids to determine whether particular output data is created. | Logical
    Looping | Internal Use Only | Advanced
    MassCopy | Internal Use Only | Advanced
    Math | Performs specific numeric calculations such as addition, multiplication, and division. | Mathematical
    NilValue | Internal Use Only | Advanced
    Scientific | Performs specific scientific calculations such as logarithmic, exponential, and trigonometric functions. | Scientific
    Scripter | Internal Use Only | Advanced
    String | Manipulates data strings by using well-known string functions such as concatenation, length, find, and trim. | String
    TableExtractor | Internal Use Only | Advanced
    TableLooping | Internal Use Only | Advanced
    Unknown | Internal Use Only | Advanced
    ValueMapping | Internal Use Only | Advanced
    XPath | Internal Use Only | Advanced
    Links: http://msdn.microsoft.com/en-us/library/microsoft.biztalk.basefunctoids.functoidcategory(BTS.20).aspx http://blog.eliasen.dk/CommentView,guid,d33b686b-b059-4381-a0e7-1c56e808f7f0.aspx
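    To make the category change concrete, below is a rough skeleton of a custom functoid written against the Microsoft.BizTalk.BaseFunctoids API. Treat it as a hedged sketch: the class name, resource IDs and lookup method are invented for illustration, and the exact setup calls may differ slightly between BizTalk versions; the point is simply that the Category assignment is what had to move from FunctoidCategory.XPath to FunctoidCategory.String.

    using System.Reflection;
    using Microsoft.BizTalk.BaseFunctoids;

    namespace MyCompany.BizTalk.Functoids
    {
        public class ConfigLookupFunctoid : BaseFunctoid
        {
            public ConfigLookupFunctoid()
            {
                // IDs above 6000 are conventionally used for custom functoids.
                this.ID = 6001;

                // Resource assembly holding the name/tooltip/description/bitmap resources.
                SetupResourceAssembly("MyCompany.BizTalk.Functoids.Resources",
                                      Assembly.GetExecutingAssembly());
                SetName("IDS_CONFIGLOOKUP_NAME");
                SetTooltip("IDS_CONFIGLOOKUP_TOOLTIP");
                SetDescription("IDS_CONFIGLOOKUP_DESC");
                SetBitmap("IDB_CONFIGLOOKUP_BITMAP");

                // The line that mattered: XPath is an internal-only category, so the
                // functoid never fired; String behaves as a normal value-producing functoid.
                this.Category = FunctoidCategory.String;

                this.SetMinParams(1);
                this.SetMaxParams(1);
                this.AddInputConnectionType(ConnectionType.AllExceptRecord);
                this.OutputConnectionType = ConnectionType.AllExceptRecord;

                // Method on this class that the generated map calls at runtime.
                SetExternalFunctionName(GetType().Assembly.FullName,
                                        "MyCompany.BizTalk.Functoids.ConfigLookupFunctoid",
                                        "LookupValue");
            }

            public string LookupValue(string key)
            {
                // Illustrative only - the real functoid read the value from a config file via XPath.
                return "value-for-" + key;
            }
        }
    }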

    Read the article

  • Multiple render targets and gamma correctness in Direct3D9

    - by Mario
    Let's say in a deferred renderer when building your G-Buffer you're going to render texture color, normals, depth and whatever else to your multiple render targets at once. Now if you want to have a gamma-correct rendering pipeline and you use regular sRGB textures as well as rendertargets, you'll need to apply some conversions along the way, because your filtering, sampling and calculations should happen in linear space, not sRGB space. Of course, you could store linear color in your textures and rendertargets, but this might very well introduce bad precision and banding issues. Reading from sRGB textures is easy: just set SRGBTexture = true; in your texture sampler in your HLSL effect code and the hardware does the conversion sRGB-linear for you. Writing to an sRGB rendertarget is theoretically easy, too: just set SRGBWriteEnable = true; in your effect pass in HLSL and your linear colors will be converted to sRGB space automatically. But how does this work with multiple rendertargets? I only want to do these corrections to the color textures and rendertarget, not to the normals, depth, specularity or whatever else I'll be rendering to my G-Buffer. Ok, so I just don't apply SRGBTexture = true; to my non-color textures, but when using SRGBWriteEnable = true; I'll do a gamma correction to all the values I write out to my rendertargets, no matter what I actually store there. I found some info on gamma over at Microsoft: http://msdn.microsoft.com/en-us/library/windows/desktop/bb173460%28v=vs.85%29.aspx For hardware that supports Multiple Render Targets (Direct3D 9) or Multiple-element Textures (Direct3D 9), only the first render target or element is written. If I understand correctly, SRGBWriteEnable should only be applied to the first rendertarget, but according to my tests it doesn't and is used for all rendertargets instead. Now the only alternative seems to be to handle these corrections manually in my shader and only correct the actual color output, but I'm not totally sure, that this'll not have any negative impact on color correctness. E.g. if the GPU does any blending or filtering or multisampling after the Linear-sRGB conversion... Do I even need gamma correction in this case, if I'm just writing texture color without lighting to my rendertarget? As far as I know, I DO need it because of the texture filtering and mip sampling happening in sRGB space instead, if I don't correct for it. Anyway, it'd be interesting to hear other people's solutions or thoughts about this.
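    For reference, the conversions being discussed are just the standard piecewise sRGB transfer functions. A minimal sketch of them follows, written in C# to match the other snippets on this page rather than in HLSL; in the manual approach the same two formulas would sit in the pixel shader, applied only to the colour output and never to the normal, depth or specular targets.

    using System;

    static class SrgbConversion
    {
        // Exact sRGB curves (piecewise, not a plain pow(x, 2.2)).
        // These mirror what SRGBTexture and SRGBWriteEnable do in hardware.

        public static double SrgbToLinear(double c)
        {
            return c <= 0.04045
                ? c / 12.92
                : Math.Pow((c + 0.055) / 1.055, 2.4);
        }

        public static double LinearToSrgb(double c)
        {
            return c <= 0.0031308
                ? c * 12.92
                : 1.055 * Math.Pow(c, 1.0 / 2.4) - 0.055;
        }

        static void Main()
        {
            // Mid-grey in sRGB (~0.5) is only ~0.214 in linear light, which is why
            // filtering and blending in the wrong space is visibly wrong.
            double linear = SrgbToLinear(0.5);
            Console.WriteLine("sRGB 0.5 -> linear {0:F3}", linear);
            Console.WriteLine("back to sRGB {0:F3}", LinearToSrgb(linear));
        }
    }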

    Read the article

  • Hostapd - WLAN as AP

    - by BBK
    I'm trying to start hostapd but without success. I'm using Headless Ubuntu 11.10 oneiric 3.0.0-16-server x86_64. WLAN driver is rt2800usb and my wireless nic card TP-Link TL-WN727N supports AP mode as shows below: us0# ifconfig wlan0 wlan0 Link encap:Ethernet HWaddr 00:27:19:be:cd:b6 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) us0# lsusb Bus 003 Device 003: ID 148f:3070 Ralink Technology, Corp. RT2870/RT3070 Wireless Adapter us0# lshw -C network *-network:3 description: Wireless interface physical id: 4 bus info: usb@3:2 logical name: wlan0 serial: 00:27:19:be:cd:b6 capabilities: ethernet physical wireless configuration: broadcast=yes driver=rt2800usb driverversion=3.0.0-16-server firmware=0.29 link=no multicast=yes wireless=IEEE 802.11bgn us0# hostapd /etc/hostapd/hostapd.conf Configuration file: /etc/hostapd/hostapd.conf Could not read interface wlan0 # The int flags: No such device nl80211 driver initialization failed. ELOOP: remaining socket: sock=4 eloop_data=0xd3e4a0 user_data=0xd3ecc0 handler=0x433880 ELOOP: remaining socket: sock=6 eloop_data=0xd411f0 user_data=(nil) handler=0x43cc10 us0# cat /etc/hostapd/hostapd.conf ssid=Home interface=wlan0 # The interface name of the card #driver=rt2800usb driver=nl80211 macaddr_acl=0 ieee80211n=1 channel=1 hw_mode=g auth_algs=1 ignore_broadcast_ssid=0 wpa=2 wpa_passphrase=88888888 wpa_key_mgmt=WPA-PSK wpa_pairwise=TKIP rsn_pairwise=CCMP us0# iw list Wiphy phy0 Band 1: Capabilities: 0x172 HT20/HT40 Static SM Power Save RX Greenfield RX HT20 SGI RX HT40 SGI RX STBC 1-stream Max AMSDU length: 7935 bytes No DSSS/CCK HT40 Maximum RX AMPDU length 65535 bytes (exponent: 0x003) Minimum RX AMPDU time spacing: 2 usec (0x04) HT RX MCS rate indexes supported: 0-7, 32 TX unequal modulation not supported HT TX Max spatial streams: 1 HT TX MCS rate indexes supported may differ Frequencies: * 2412 MHz [1] (20.0 dBm) * 2417 MHz [2] (20.0 dBm) * 2422 MHz [3] (20.0 dBm) * 2427 MHz [4] (20.0 dBm) * 2432 MHz [5] (20.0 dBm) * 2437 MHz [6] (20.0 dBm) * 2442 MHz [7] (20.0 dBm) * 2447 MHz [8] (20.0 dBm) * 2452 MHz [9] (20.0 dBm) * 2457 MHz [10] (20.0 dBm) * 2462 MHz [11] (20.0 dBm) * 2467 MHz [12] (20.0 dBm) (passive scanning, no IBSS) * 2472 MHz [13] (20.0 dBm) (passive scanning, no IBSS) * 2484 MHz [14] (20.0 dBm) (passive scanning, no IBSS) Bitrates (non-HT): * 1.0 Mbps * 2.0 Mbps (short preamble supported) * 5.5 Mbps (short preamble supported) * 11.0 Mbps (short preamble supported) * 6.0 Mbps * 9.0 Mbps * 12.0 Mbps * 18.0 Mbps * 24.0 Mbps * 36.0 Mbps * 48.0 Mbps * 54.0 Mbps max # scan SSIDs: 4 Supported interface modes: * IBSS * managed * AP * AP/VLAN * WDS * monitor * mesh point Supported commands: * new_interface * set_interface * new_key * new_beacon * new_station * new_mpath * set_mesh_params * set_bss * authenticate * associate * deauthenticate * disassociate * join_ibss * Unknown command (68) * Unknown command (55) * Unknown command (57) * Unknown command (59) * Unknown command (67) * set_wiphy_netns * Unknown command (65) * Unknown command (66) * connect * disconnect The question is: Why the hostapd not starting?

    Read the article

  • Closing the Gap: 2012 IOUG Enterprise Data Security Survey

    - by Troy Kitch
    The new survey from the Independent Oracle Users Group (IOUG) titled "Closing the Security Gap: 2012 IOUG Enterprise Data Security Survey," uncovers some interesting trends in IT security among IOUG members and offers recommendations for securing data stored in enterprise databases. "Despite growing threats and enterprise data security risks, organizations that implement appropriate detective, preventive, and administrative safeguards are seeing significant results," finds the report's author, Joseph McKendrick, analyst, Unisphere Research. Produced by Unisphere Research and underwritten by Oracle, the report is based on responses from 350 IOUG members representing a variety of job roles, organization sizes, and industry verticals. Key findings include Corporate budgets increase, but trailing. Though corporate data security budgets are increasing this year, they still have room to grow to reach the previous year’s spending. Additionally, more than half of respondents say their organizations still do not have, or are unaware of, data security plans to help address contingencies as they arise. Danger of unauthorized access. Less than a third of respondents encrypt data that is either stored or in motion, and at the same time, more than three-fifths say they send actual copies of enterprise production data to other sites inside and outside the enterprise. Privileged user misuse. Only about a third of respondents say they are able to prevent privileged users from abusing data, and most do not have, or are not aware of, ways to prevent access to sensitive data using spreadsheets or other ad hoc tools. Lack of consistent auditing. A majority of respondents actively collect native database audits, but there has not been an appreciable increase in the implementation of automated tools for comprehensive auditing and reporting across databases in the enterprise. IOUG RecommendationsThe report's author finds that securing data requires not just the ability to monitor and detect suspicious activity, but also to prevent the activity in the first place. To achieve this comprehensive approach, the report recommends the following. Apply an enterprise-wide security strategy. Database security requires multiple layers of defense that include a combination of preventive, detective, and administrative data security controls. Get business buy-in and support. Data security only works if it is backed through executive support. The business needs to help determine what protection levels should be attached to data stored in enterprise databases. Provide training and education. Often, business users are not familiar with the risks associated with data security. Beyond IT solutions, what is needed is a well-engaged and knowledgeable organization to help make security a reality. Read the IOUG Data Security Survey Now.

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance The History Of Unfortunate Link-Editor Defaults The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort. 
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines: Change ld Defaults Since the world would be a better place the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting. New Link-Editor One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of exiting code in the world built around the existing ld command, that reaches back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues. An Option To Make ld Do The Right Things Automatically This line of reasoning is best summarized by a CR filed in 2005, entitled 6239804 make it easier for ld(1) to do what's best The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, that they understand that z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change. 
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offers specific advice, and not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that let's you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows: % ld -Dargs -z guidance=foo,nodefs debug: debug: Solaris Linkers: 5.11-1.2275 debug: debug: arg[1] option=-D: option-argument: args debug: arg[2] option=-z: option-argument: guidance=foo,nodefs debug: warning: unrecognized -z guidance item: foo The -z fatal-warning option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms: -z fatal-warnings | nofatal-warnings --fatal-warnings | --no-fatal-warnings The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior. The -z guidance option is defined as follows: -z guidance[=item1,item2,...] Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows. Specify Required Dependencies Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs. Do Not Specify Non-Required Dependencies Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused. Lazy Loading Dependencies should be identified for lazy loading. 
Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload. Direct Bindings Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect. Pure Text Segment Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn or -z textoff options are encountered. This guidance can be disabled with -z guidance=notext. Mapfile Syntax All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile. Library Search Path Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath. In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects. Example The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings: Does not specify all it's dependencies Specifies dependencies it does not use Does not use direct bindings Uses a version 1 mapfile Contains relocations to the readonly allocable text (not PIC) This scenario is sadly very common — many shared objects have one or more of these issues. % cat hello.c #include <stdio.h> #include <unistd.h> void hello(void) { printf("hello user %d\n", getpid()); } % cat mapfile.v1 # This version 1 mapfile will trigger a guidance message % cc hello.c -o hello.so -G -M mapfile.v1 -lelf As you can see, the operation completes without error, resulting in a usable object. 
However, turning on guidance reveals a number of things that could be better: % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance ld: guidance: version 2 mapfile syntax recommended: mapfile.v1 ld: guidance: -z lazyload option recommended before first dependency ld: guidance: -B direct or -z direct option recommended before first dependency Undefined first referenced symbol in file getpid hello.o (symbol belongs to implicit dependency /lib/libc.so.1) printf hello.o (symbol belongs to implicit dependency /lib/libc.so.1) ld: warning: symbol referencing errors ld: guidance: -z defs option recommended for shared objects ld: guidance: removal of unused dependency recommended: libelf.so.1 warning: Text relocation remains referenced against symbol offset in file .rodata1 (section) 0xa hello.o getpid 0x4 hello.o printf 0xf hello.o ld: guidance: position independent (PIC) code recommended for shared objects ld: guidance: see ld(1) -z guidance for more information Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things: % cat mapfile.v2 # This version 2 mapfile will not trigger a guidance message $mapfile_version 2 % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance There are situations in which the guidance does not fit the object being built. For instance, you want to build an object without direct bindings: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance ld: guidance: -B direct or -z direct option recommended before first dependency ld: guidance: see ld(1) -z guidance for more information It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect Conclusions The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will replay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • Unrated Easy iOS 6.1.4/6.1.3 Unlock/Jailbreak iPhone 5/4S/4/3GS Untethered System

    - by user171772
    Popular jailbreak tool Unlock-Jailbreak.net – compiled by the iPhone Team – has just been updated with full support for Unlock/Jailbreak iPhone 5/4S/4/3GS iOS 6.1.4 and 6.1.3/6.0.1 Untethered. You may have caught our tutorial, which detailed how one could jailbreak their device tethered using Redsn0w, although since it was a pre-iOS 6.1.1 release, users needed to "point" the tool to the older firmware. Team Unlock-Jailbreak was established a few years ago and combines some of the jailbreak and unlock community's most talented developers, all known for producing reliable jailbreaks in the past. This team was assembled in order to develop a reliable untethered jailbreak and unlock for iPhone 5/4S/4 on iOS 6.1 for post-A5 devices, including the iPhone 5, the iPad mini and the latest-generation iPad. This has now been achieved with the just-released userland jailbreak tool, known as Unlock-Jailbreak.net. To Jailbreak and Unlock your iPhone 5/4/4S/3GS on iOS 6.1.4 and 6.1.3, visit the official website http://www.Unlock-Jailbreak.net. Unlock-Jailbreak.net was formed in mid-2008 and has successfully jailbroken over 250,000 iPhones worldwide. This is unparalleled by any other service in the industry. They have achieved this by combining a very simple solution with a fantastic customer service department that is available 24/7 through many forms of contact, including telephone. Unlock-Jailbreak from Unlock-Jailbreak.net has been downloaded by over 250,000 customers located in over 145 countries. To further ensure customers of its product's usability, Unlock-Jailbreak offers a 100% full money-back guarantee on all orders. Customers dissatisfied with the company's product will be given a full refund, no questions asked. One good advantage of the software is that the jailbreaking and unlocking process is completely reversible and there will be no evidence that the iPhone has been jailbroken and unlocked. iOS 6.1/6.1.4 and 6.1.3 come with many new features and updates for multitasking and storage. By unlocking and jailbreaking the iPhone, Unlock/Jailbreak iPhone 5/4S/4/3GS iOS 6.1/6.1.4 and 6.1.3/6.0.1 Untethered unleashes unlimited possibilities to improve this already fantastic experience and realise the iPhone's full potential. Before going through any jailbreak process with Unlock-Jailbreak it is always good housekeeping to perform a full backup of all information on the device. It is unlikely that anything will go wrong during the process, but when undertaking any process that modifies the internals of a file system it is always prudent to err on the side of caution.

    Read the article

  • Attunity Oracle CDC Solution for SSIS - Beta

    We in no way work for Attunity but we were asked to test drive a beta version of their Oracle CDC solution for SSIS.  Everybody should know that moving more data than you need to takes too much time and uses resources that may better be employed doing something else.  Change data Capture is a technology that is designed to help you identify only the data that has had something done to it and you can therefore move only what is needed.  Microsoft have implemented this exact functionality into SQL server 2008 and I really like it there.  Attunity though are doing it on Oracle. DISCLAIMER: This is a BETA release and some of the parts are a bit ugly/difficult to work with.  The idea though is definitely right and the product once working does exactly what it says on the tin.  They have always been helpful to me when I have had a problem with the product and if that continues then beta testing pain should be eased somewhat. In due course I am going to be doing some videos around me using the product.  If you use Oracle and SSIS then give it a go. Here is their product description.   Attunity is a Microsoft SQL Server technology partner and the creator of the Microsoft Connectors for Oracle and Teradata, currently available in SQL Server 2008 Enterprise Edition. Attunity released a beta version of the Attunity Oracle-CDC for SSIS, a product that integrates continually changing Oracle data into SSIS, efficiently and in real-time. Attunity designed the product and integrated it into SSIS to create the simple creation of change data capture (CDC) solutions, accelerate implementation time, and reduce resources and costs. They also utilize log-based CDC so the solution has minimal impact on the Oracle source system. You can use the product to implement enterprise-class data replication, synchronization, and real-time business intelligence (BI) and data warehousing projects, quickly and efficiently, leveraging their existing SQL Server investments and resource skills. Attunity architected the product specifically for the Microsoft SSIS developer community and the product is available for both SQL Server 2005 and SQL Server 2008. It offers the following key capabilities: · Log-based, non-intrusive Oracle CDC · Full integration into SSIS and the Business Intelligence Developer Studio · Automatic generation of SSIS packages for CDC as well as full-loads of Oracle data · Filtering of Oracle tables and columns at the source · Monitoring and control of CDC processing Click to learn more and download the beta.

    Read the article

  • How to sort a ListView control by a column in Visual C#

    - by bconlon
    Microsoft provide an article of the same name (previously published as Q319401) and it shows a nice class 'ListViewColumnSorter' for sorting a standard ListView when the user clicks the column header. This is very useful for String values; however, for Numeric or DateTime data it gives odd results. E.g. 100 would come before 99 in an ascending sort as the string compare sees 1 < 9. So my challenge was to allow other types to be sorted. This turned out to be fairly simple as I just needed to create an inner class in ListViewColumnSorter which extends the .Net CaseInsensitiveComparer class, and then use this as the ObjectCompare member's type. Note: Ideally we would be able to use IComparer as the member's type, but the Compare method is not virtual in CaseInsensitiveComparer, so we have to create an exact type: public class ListViewColumnSorter : IComparer { private MyComparer ObjectCompare; /* was: private CaseInsensitiveComparer ObjectCompare */ ... rest of Microsoft's class implementation... } Here is my private inner comparer class, note the 'new int Compare' as Compare is not virtual, and also note we pass the values to the base compare as the correct type (e.g. Decimal, DateTime) so they compare correctly: private class MyComparer : CaseInsensitiveComparer { public new int Compare(object x, object y) { try { string s1 = x.ToString(); string s2 = y.ToString(); // check for a numeric column decimal n1, n2 = 0; if (Decimal.TryParse(s1, out n1) && Decimal.TryParse(s2, out n2)) return base.Compare(n1, n2); else { // check for a date column DateTime d1, d2; if (DateTime.TryParse(s1, out d1) && DateTime.TryParse(s2, out d2)) return base.Compare(d1, d2); } } catch (ArgumentException) { } // just use base string compare return base.Compare(x, y); } } You could extend this for other types, even custom classes as long as they support IComparable. Microsoft also have another article How to: Sort a GridView Column When a Header Is Clicked that shows this for WPF, which looks conceptually very similar. I need to test it out to see if it handles non-string types.
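    For completeness, here is a minimal hook-up sketch in the spirit of the original Q319401 article (the control and handler names are illustrative, and it assumes the extended ListViewColumnSorter exposes the SortColumn and Order properties described there):

    using System.Windows.Forms;

    public partial class MainForm : Form
    {
        private readonly ListViewColumnSorter lvwColumnSorter = new ListViewColumnSorter();

        public MainForm()
        {
            InitializeComponent();

            // Assumes a designer-created ListView named listView1 in Details view.
            listView1.ListViewItemSorter = lvwColumnSorter;
            listView1.ColumnClick += listView1_ColumnClick;
        }

        private void listView1_ColumnClick(object sender, ColumnClickEventArgs e)
        {
            if (e.Column == lvwColumnSorter.SortColumn)
            {
                // Same column clicked again: flip the sort direction.
                lvwColumnSorter.Order = (lvwColumnSorter.Order == SortOrder.Ascending)
                    ? SortOrder.Descending
                    : SortOrder.Ascending;
            }
            else
            {
                lvwColumnSorter.SortColumn = e.Column;
                lvwColumnSorter.Order = SortOrder.Ascending;
            }

            // Numeric and date columns now compare correctly via the MyComparer inner class.
            listView1.Sort();
        }
    }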

    Read the article

< Previous Page | 520 521 522 523 524 525 526 527 528 529 530 531  | Next Page >