Search Results

Search found 1334 results on 54 pages for 'constraint satisfaction'.

Page 26/54

  • UK Pilot Event: Fusion Applications Release 8 Simplified UI: Extensibility & Customization of User Experience

    - by ultan o'broin
    Interested? Of course you are! But read on to understand the what, why, where, and who, and ensure this great opportunity is right for you before signing up. There will be some demand for this one, so hurry!
    What: A one-day workshop where Applications User Experience will preview the proposed content for communicating the user experience (UX) tool kit intended for the next release of Oracle Fusion Applications. We will walk through the content, explain our approach, and tell you about our activities for communicating to partners and customers how to customize and extend their Release 8 user experiences for Oracle Fusion Applications with composers and the Oracle Application Development Framework Toolkit.
    When and Where: Dec. 11, 2013 @ Oracle UK in Thames Valley Park. Again: this event is held in person in the UK, so ensure you can travel!
    Why: We are responding to Oracle partner interest in extending and customizing Simplified UIs for Release 7, and we will use the upcoming release as our springboard for getting a powerful productivity and satisfaction message out to the Oracle ADF enterprise methodology development community, Fusion customer implementation and tailoring teams, and our Oracle partner ecosystem. This event will also be an opportunity for attendees to give Oracle feedback on the approach, ensuring our messaging and resources meet your business needs, and to flag anything else needed to get up and running fast!
    Who: The ideal participants for this workshop are those who will be involved in system implementation roles for HCM and CRM Oracle Fusion Applications Release 8, as well as seasoned ADF developers supporting Oracle Fusion Applications. And yes, Cloud is part of the agenda!
    How to Register: Use this URL: http://bit.ly/UXEXTUK13. If you have questions, send them along right away to [email protected].
    Deadline: Please RSVP by November 1, 2013.

    Read the article

  • Player sprite moving slower on iPhone 4

    - by nvillec
    I just finished getting movement/jump animation working for a player sprite in Xcode using Cocos2D. The basic movement algorithm is a timer that updates every 0.01 sec, changing the sprite position to (sprite.position.x + xVel, sprite.position.y + yVel). Each time a movement button is tapped, the appropriate velocity (initialized to 0) is changed to whatever speed I choose, then a stop-movement button returns the velocity to 0. It's not an ideal solution, but I'm very new at this and stoked to at least have that working with little help from the internet. So I may not have explained that perfectly, but it is in fact working to my satisfaction in Xcode's iPhone Simulator. However, when I build it for my device and run it on my phone, the sprite's movement speed is noticeably slower than in Xcode. At first I thought it must have to do with the resolution of the iPhone 4, making the sprite's movement path twice as long, but I found that if I pull up the multitask bar, then return to the app, the speed will sometimes jump back to normal. My second theory was that the code is just inefficient and is bogging the processes down, but wouldn't I see this reflected in the frame rate? It stays at 59-60 the whole time, and the spritesheet animation runs at the correct speed. Has anyone experienced this? Is this a really obvious issue that I'm completely missing? Any help (or tips for optimizing my approach to movement) would be much appreciated!

    Read the article

  • Oracle Exalogic Customer Momentum @ OOW'12

    - by Sanjeev Sharma
    [Adapted from here] At Oracle Open World 2012, I sat down with some of the Oracle Exalogic early adopters to discuss the business benefits these businesses were realizing by embracing the engineered systems approach to data-center modernization and application consolidation. Below is an overview of the 4 businesses that won the Oracle Fusion Middleware Innovation Award for Oracle Exalogic this year.
    Company: Netshoes. About: Leading online retailer of sporting goods in Latin America. Challenges: Rapid business growth resulted in frequent outages and poor response time of the online store-front; a conventional, ad-hoc approach to horizontal scaling resulted in high CAPEX and OPEX; poor performance and unavailability of the online store-front resulted in revenue loss from purchase abandonment. Solution: Consolidated ATG Commerce and Oracle WebLogic running on Oracle Exalogic. Business Impact: Reduced abandonment rates, resulting in a two-digit increase in online conversion rates translating directly into revenue uplift.
    Company: Claro. About: Leading communications services provider in Latin America. Challenges: Support business growth over the next 3-5 years while maximizing re-use of existing middleware and application investments with minimal effort and risk. Solution: Consolidated Oracle Fusion Middleware components (Oracle WebLogic, Oracle SOA Suite, Oracle Tuxedo) and Java applications onto Oracle Exalogic and Oracle Exadata. Business Impact: Improved partner SLAs 7x while improving throughput 5x and response time 35x for Java applications.
    Company: UL. About: Leading safety testing and certification organization in the world. Challenges: Transition from a non-profit to a profit-oriented enterprise and grow from $1B to $5B in annual revenues in the next 5 years; undertake a massive business transformation by aligning change strategy with execution. Solution: Consolidated Oracle Applications (E-Business Suite, Siebel, BI, Hyperion) and Oracle Fusion Middleware (AIA, SOA Suite) on Oracle Exalogic and Oracle Exadata. Business Impact: Reduced financial and operating risk in re-architecting IT services to support new business capabilities serving 87,000 manufacturers.
    Company: Ingersoll Rand. About: Leading manufacturer of industrial, climate, residential and security solutions. Challenges: Business continuity risks due to complexity in enforcing consistent operational and financial controls; reactive business decisions reduced the ability to differentiate and compete. Solution: Consolidated Oracle E-Business Suite on Oracle Exalogic and Oracle Exadata. Business Impact: Service differentiation with faster order provisioning and a shorter lead-to-cash cycle, translating into higher customer satisfaction and quicker cash conversion.
    Check out the winners of the Oracle Fusion Middleware Innovation awards in other categories here.

    Read the article

  • When to use each user research method

    - by user12277104
    There are a lot of user research methods out there, but sometimes we get stuck in a rut, conducting all formative usability testing before coding, or running surveys to gather satisfaction data. I'll be the first to admit that it happens to me, but to get out of a rut, it just takes a minute to look at where I am in the design & development cycle, what kind(s) of data I need, and what methods are available to me. We need reminders, or refreshers, every once in a while. One tool I've found useful is a graphic organizer that I created many years ago. It's been through several revisions, as I've adapted it to the product cycles of the places I've worked, changed my mind about how to categorize it, and added methods that I've used or created over time. I shared a version of this table at the 2012 International UPA conference, and I was contacted by someone yesterday who wanted to use it in a university course on user-centered design. I was flattered at the thought, but embarrassed, because I was sure it needed updating -- that was a year ago, after all. But I opened it today, and really, there's not much I'd change -- sure, I could add some nuance regarding types of formative testing, such as modality (remote, unmoderated remote, or in-person) or flavor of testing (RITE, RITE-Krug, comparative, performance), but I think it's pretty much ok as is. Click on the image below to get the full-size PDF. And whether it's entirely "right" or "wrong" isn't the whole value of looking at these methods across the product lifecycle. The real value lies in the reminder that I have options. And those options change as the field changes, so while I don't expect this graphic to have an eternal shelf life, it's still ok a year after I last updated it. That said, if you find something missing or out of place, let me know :)

    Read the article

  • Is there a LOGO interpreter that actually has a turtle?

    - by Tim Post
    This is not a repeat of the now infamous "How do I move the turtle in LOGO?" Recently, I had the following conversation with my five-year-old daughter: Daughter: Daddy, do you write programs? Me: Yes! Daughter: Daddy, what's a program? Me: A program is a set of instructions that a computer follows. Daughter: Daddy, can I write a program too? Me: Sure! This got me scrambling to think of a very basic language that a five-year-old could get some satisfaction from mastering rather quickly. I'm ashamed to admit that the first thing that came to mind was this:
        10 INPUT "Tell me a secret" A$
        20 PRINT "Wow really? :" A$
        30 GOTO 10
    That isn't going to hold a five-year-old's attention for very long, and it requires too much of a lecture. However, moving a turtle around and drawing neat pictures might just work. Sadly, my search for a LOGO interpreter yielded nothing but ad-ridden sites, flight simulators and a whole bunch of other stuff that I really don't want. I'm hoping to find a cross-platform (Java / Python) LOGO interpreter (dare I call it simulator?) with the following features: can save / replay commands (stored programs); has an actual turtle; sound effects are a plus. Have you stumbled across something like this, and if so, can you provide a link? I hate to ask a 'shopping' sort of question, but it seemed much better than "Is LOGO appropriate for a five year old?"

    Read the article

  • The MDM Journey: From the Customer Perspective

    - by Mala Narasimharajan
    Master Data Management is about more than a single version of the truth or a 360-degree view of the customer. It spans multiple domains, ranging from customers to suppliers to products and beyond. MDM is pivotal to providing a solid customer experience - one that results in repeat business, continued loyalty and, last but not least, high customer satisfaction. Customer experience is not only defined as accurate information about the customer for the enterprise, but also presenting the customer with the right information about products, orders, product availability, etc. Let's take a look at a couple of customer use cases with Oracle MDM. Below is a picture from a recent customer panel. Oracle MDM is a key platform for increasing upsell/cross-sell opportunities, improving the targeting of customers, uncovering new sales opportunities, reducing inaccuracies in mailing marketing materials to prospects, and tapping into the full value of a customer across business units more accurately. A leading investment and private bank leverages Oracle MDM to do a better job of identifying clients and their levels of investment, as well as to consistently manage them through areas such as credit, risk, new accounts, etc. Ultimately, they are looking to understand client investments and touchpoints across the company's offerings. Another use case for Oracle MDM is with a major financial and insurance services company with clients worldwide, looking to resolve customer data inaccuracies and client information stored differently across multiple systems. For more information on Oracle Master Data Management, click here.

    Read the article

  • Be Prepared: Technology Trends Converge and Disrupt

    - by Richard Lefebvre
    Cloud. Big data. Mobile. Social media: these mega trends in technology have had a profound impact on our lives. And now, according to SVP Ravi Puri from North America Oracle Consulting Services, these trends are starting to converge and will affect us even more. His article, “Cloud, Analytics, Mobile, And Social: Convergence Will Bring Even More Disruption”, appeared in Forbes on June 6. For example, mobile and social are causing huge changes in the business world. Big data and cloud are coming together to help us with deep analytical insights. And much more. These convergences are causing another wave of disruption, which can drive all kinds of improvements in such things as customer satisfaction, competitive advantage, and growth. But, according to Puri, companies need to be prepared. In this article, Puri urges companies to get out in front of the new innovations. He gives good directions on how to do so to accelerate time to value and minimize risk. The post is a good thought leadership piece to pass on to your customers.

    Read the article

  • Customer Experience Management for Retail 2.0 - part 2 / 2

    - by Sanjeev Sharma
    In the previous post, I discussed some of the key trends shaping up in the retail industry, their implications and the challenges facing retailers seeking to regain control of the buyer-seller relationship. Is Customer Experience Management the panacea for the ailing retailers who are now awakening to the power of the consumer? Quite honestly, customer acquisition, retention and satisfaction have been top of mind for retailers for quite some time now. The missing piece of this puzzle is bringing all those countless hours of strategy and planning to fruition. This is more of an execution gap than anything else. Although technology has made consumers more informed, more mobile and more social, customer experience is still largely defined by delivering on the following: consistent experiences, whether shopping online or offline; personalizable interaction ("mass market" sounds good as an internal strategy but not when you are a buyer!); and timely order fulfillment, if not pro-active notification of delays. Below is a concept architecture for streamlining front-end, mid-office and back-end interfaces through shared processes to achieve consistency and efficiency in managing the customer experience from order capture to order provisioning.

    Read the article

  • Successful Fusion CRM Bootcamp in Paris - July 24-26th

    - by Richard Lefebvre
    The first Fusion CRM Bootcamp for EMEA partners successfully took place at the Paris Pullman Bercy hotel on July 24-26th. The agenda covered 14 Fusion CRM topics in depth, including detailed presentations and hands-on exercises, delivered by a team of Fusion CRM experts from Oracle Product Development. 89 participants representing 55 companies from 14 different countries attended this event, which was also a great opportunity to network with Oracle Product Development and Alliances & Channels executives during the breaks and the "Fusion Lounge" session after the training each day. As expressed by the participants in the event survey, overall satisfaction reached an impressive 85+ percent responding that the event “met or exceeded the expectations”, with individual comments such as: "On top of the presentation of Fusion CRM as a product, this event allowed us to better understand Oracle's product and rollout strategy." "The ability to meet the development team was really a bonus." "Extremely valuable information that enables integrators to get on the road to Fusion CRM." "Excellent organization, good product information coverage and demonstration." Additional Fusion CRM bootcamps are planned across EMEA in the next quarters, although they will probably be under a different format, which is still to be defined.

    Read the article

  • Oracle Buys BigMachines - Adds Leading Configure, Price and Quote (CPQ) Cloud to the Oracle Cloud to Enable Smarter Selling

    - by Richard Lefebvre
    News Facts: Oracle today announced that it has entered into an agreement to acquire BigMachines, a leading cloud-based Configure, Price and Quote (CPQ) solution provider. BigMachines’ CPQ Cloud accelerates the conversion of sales opportunities into revenue by automating the sales order process with guided selling, dynamic pricing, and an easy-to-use workflow approval process, accessible anywhere, on any device. Companies that use sales automation technology often rely on manual, cumbersome and disconnected processes to convert opportunities into orders. This creates errors, adds costs, delays revenue, and degrades the customer experience. BigMachines’ CPQ Cloud extends sales automation to include the creation of an optimal quote, which enables sales personnel to easily configure and price complex products, select the best options, promotions and deal terms, and include up-sell and renewals, all using automated workflows. In combination with Oracle’s enterprise-grade cloud solutions, including the Marketing, Sales, Social, Commerce and Service Clouds, Oracle and BigMachines will create an end-to-end smarter selling cloud solution so sales personnel are more productive, customers are more satisfied, and companies grow revenue faster. More information on this announcement can be found at http://www.oracle.com/bigmachines
    Supporting Quotes: “The fundamental goals of smarter selling are to provide sales teams with the information, access, and insights they need to maximize revenue opportunities and execute on all phases of the sales cycle,” said Thomas Kurian, Executive Vice President, Oracle Development. “By adding BigMachines’ CPQ Cloud to the Oracle Cloud, companies will be able to drive more revenue and increase customer satisfaction with a seamlessly integrated process across marketing and sales, pricing and quoting, and fulfillment and service.” “BigMachines has developed leading CPQ solutions that serve companies of all sizes across multiple industries,” said David Bonnette, BigMachines’ CEO. “Together with Oracle, we expect to provide a complete cloud solution to manage sales processes and deliver exceptional customer experiences.”
    Supporting Resources: About Oracle and BigMachines; General Presentation; Customer and Partner Letter; FAQ

    Read the article

  • How to divide work to a network of computers?

    - by Morpork
    Imagine a scenario as follows: let's say you have a central computer which generates a lot of data. This data must go through some processing, which unfortunately takes longer than generating it. In order for the processing to catch up with real time, we plug in more slave computers. Further, we must take into account the possibility of slaves dropping out of the network mid-job, as well as additional slaves being added. The central computer should ensure that all jobs are finished to its satisfaction, and that jobs dropped by a slave are re-tasked to another. The main question is: what approach should I use to achieve this? But perhaps the following would help me arrive at an answer: Is there a name or design pattern for what I am trying to do? What domain of knowledge do I need to achieve the goal of getting these computers to talk to each other? (e.g. will a database, which I have some knowledge of, be enough, or will this involve sockets, which I have yet to gain knowledge of?) Are there any examples of such a system? The main question is a bit general, so it would be good to have a starting point/reference point. Note I am assuming constraints of C++ and Windows, so solutions pointing in that direction would be appreciated.
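    On the "will a database be enough" point, a shared database can serve as the coordination layer if the central computer and the slaves agree on a job table that slaves claim atomically and that the master can re-task when a heartbeat goes stale. A minimal sketch, in SQL Server flavored SQL since the poster assumes Windows (the table, column and slave names are illustrative only, not from the post):

        CREATE TABLE JobQueue (
            JobId         INT IDENTITY(1,1) PRIMARY KEY,
            Payload       VARBINARY(MAX) NOT NULL,    -- the chunk of generated data to process
            Status        TINYINT NOT NULL DEFAULT 0, -- 0 = pending, 1 = in progress, 2 = done
            ClaimedBy     VARCHAR(64) NULL,           -- identifier of the slave working on it
            LastHeartbeat DATETIME NULL               -- updated by the slave; lets the master detect drop-outs
        );

        -- A slave claims one pending job; the single UPDATE is atomic, so two slaves cannot grab the same row.
        UPDATE TOP (1) JobQueue
        SET Status = 1, ClaimedBy = 'slave-07', LastHeartbeat = GETDATE()
        OUTPUT inserted.JobId
        WHERE Status = 0;

        -- The master re-tasks jobs whose worker has gone silent for too long.
        UPDATE JobQueue
        SET Status = 0, ClaimedBy = NULL
        WHERE Status = 1 AND LastHeartbeat < DATEADD(MINUTE, -5, GETDATE());

    Polling a table like this is simpler than sockets but adds latency and load on the database, so treat it as a starting point rather than a definitive design; the pattern being approximated here is the classic master/worker (work queue) pattern.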

    Read the article

  • Drag and drop in as3 on specific path (ex bezier curve)

    - by abinop
    I need to implement drag-and-drop functionality where I can define and constrain the route of the draggable object, like in http://www.kirupa.com/forum/showthread.php?t=330302 , only that I have the paths designed rather than calculated by a math function. So, in fact, as the mouse moves I need to tell the object to follow the custom path/movieclip.

    Read the article

  • subsonic.migrations and Oracle XE

    - by andrecarlucci
    Hello, probably I'm doing something wrong, but here it goes: I'm trying to create a database using subsonic.migrations on Oracle XE version 10.2.0.1.0. I have ODP v 10.2.0.2.20 installed. This is my app.config:
        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <section name="SubSonicService" type="SubSonic.SubSonicSection, SubSonic" requirePermission="false"/>
          </configSections>
          <connectionStrings>
            <add name="test" connectionString="Data Source=XE; User Id=test; Password=test;"/>
          </connectionStrings>
          <SubSonicService defaultProvider="test">
            <providers>
              <clear/>
              <add name="test" type="SubSonic.OracleDataProvider, SubSonic" connectionStringName="test" generatedNamespace="testdb"/>
            </providers>
          </SubSonicService>
        </configuration>
    And that's my first migration:
        public class Migration001_Init : Migration
        {
            public override void Up()
            {
                //Create the records table
                TableSchema.Table records = CreateTable("asdf");
                records.AddColumn("RecordName");
            }
            public override void Down()
            {
                DropTable("asdf");
            }
        }
    When I run sonic.exe, I get this exception:
        Setting ConfigPath: 'App.config'
        Building configuration from D:\Users\carlucci\Documents\Visual Studio 2008\Projects\Wum\Wum.Migration\App.config
        Adding connection to test
        ERROR: Trying to execute migrate
        Error Message: System.Data.OracleClient.OracleException: ORA-02253: especificação de restrição não permitida aqui
           at System.Data.OracleClient.OracleConnection.CheckError(OciErrorHandle errorHandle, Int32 rc)
           at System.Data.OracleClient.OracleCommand.Execute(OciStatementHandle statementHandle, CommandBehavior behavior, Boolean needRowid, OciRowidDescriptor& rowidDescriptor, ArrayList& resultParameterOrdinals)
           at System.Data.OracleClient.OracleCommand.ExecuteNonQueryInternal(Boolean needRowid, OciRowidDescriptor& rowidDescriptor)
           at System.Data.OracleClient.OracleCommand.ExecuteNonQuery()
           at SubSonic.OracleDataProvider.ExecuteQuery(QueryCommand qry) in D:\@SubSonic\SubSonic\SubSonic\DataProviders\OracleDataProvider.cs:line 350
           at SubSonic.DataService.ExecuteQuery(QueryCommand cmd) in D:\@SubSonic\SubSonic\SubSonic\DataProviders\DataService.cs:line 544
           at SubSonic.Migrations.Migrator.CreateSchemaInfo(String providerName) in D:\@SubSonic\SubSonic\SubSonic.Migrations\Migrator.cs:line 249
           at SubSonic.Migrations.Migrator.GetCurrentVersion(String providerName) in D:\@SubSonic\SubSonic\SubSonic.Migrations\Migrator.cs:line 232
           at SubSonic.Migrations.Migrator.Migrate(String providerName, String migrationDirectory, Nullable`1 toVersion) in D:\@SubSonic\SubSonic\SubSonic.Migrations\Migrator.cs:line 50
           at SubSonic.SubCommander.Program.Migrate() in D:\@SubSonic\SubSonic\SubCommander\Program.cs:line 264
           at SubSonic.SubCommander.Program.Main(String[] args) in D:\@SubSonic\SubSonic\SubCommander\Program.cs:line 90
        Execution Time: 379ms
    What am I doing wrong? Thanks a lot for any help :) Andre Carlucci
    UPDATE: As pointed out by Anton, the problem is the SubSonic OracleSqlGenerator. It is trying to create the schema table using this SQL:
        CREATE TABLE SubSonicSchemaInfo (
            version int NOT NULL CONSTRAINT DF_SubSonicSchemaInfo_version DEFAULT (0)
        )
    which doesn't work on Oracle. The correct SQL would be:
        CREATE TABLE SubSonicSchemaInfo (
            version int DEFAULT (0),
            constraint DF_SubSonicSchemaInfo_version primary key (version)
        )
    The funny thing is that, since this is the very first SQL executed by SubSonic migrations, NOBODY EVER TESTED it on Oracle.

    Read the article

  • Oracle Trigger to persist value

    - by Jose Jose
    I have a column that I need to guarantee never gets set to anything other than "N" - I thought a trigger would be the perfect solution for this, but I can't seem to figure out how to make it so that any time the column gets set to something other than "N", I reset it back to "N". Any pointers? EDIT: I wouldn't want to do a constraint, because the application that will potentially change it to Y is outside of my control and I don't want it to get errors when it sets it to Y; I just want to passively set it back to N without fanfare.
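    One way to get that "without fanfare" behavior is a BEFORE row trigger that quietly overwrites whatever value the application sends, so the statement still succeeds but the column stays at "N". A rough sketch, with the table and column names as placeholders rather than anything from the post:

        CREATE OR REPLACE TRIGGER trg_keep_flag_n
        BEFORE INSERT OR UPDATE OF flag_col ON my_table
        FOR EACH ROW
        BEGIN
          -- silently discard the incoming value; no error is raised, so the other application is unaffected
          :NEW.flag_col := 'N';
        END;
        /

    Because the assignment happens before the row is written, the other application's UPDATE to 'Y' completes normally while the stored value remains 'N'.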

    Read the article

  • Opinions on sensor / reading / alert database design

    - by Mark
    I've asked a few questions lately regarding database design, probably too many ;-) However, I believe I'm slowly getting to the heart of the matter with my design and am slowly boiling it down. I'm still wrestling with a couple of decisions regarding how "alerts" are stored in the database. In this system, an alert is an entity that must be acknowledged, acted upon, etc. Initially I related readings to alerts like this (very cut down):
    [Location] LocationId
    [Sensor] SensorId, LocationId, UpperLimitValue, LowerLimitValue
    [SensorReading] SensorReadingId, Value, Status, Timestamp
    [SensorAlert] SensorAlertId
    [SensorAlertReading] SensorAlertId, SensorReadingId
    The last table associates readings with the alert, because it is the readings that dictate whether the sensor is in alert or not. The problem with this design is that it allows readings from many sensors to be associated with a single alert - whereas each alert is for a single sensor only and should only have readings for that sensor associated with it (should I be bothered that the DB allows this, though?). I thought, to simplify things, why even bother with the SensorAlertReading table? Instead I could do this:
    [Location] LocationId
    [Sensor] SensorId, LocationId
    [SensorReading] SensorReadingId, SensorId, Value, Status, Timestamp
    [SensorAlert] SensorAlertId, SensorId, Timestamp
    [SensorAlertEnd] SensorAlertId, Timestamp
    Basically I'm not associating readings with the alert now - instead I just know that an alert was active between a start and end time for a particular sensor, and if I want to look up the readings for that alert I can do so. Obviously the downside is I no longer have any constraint stopping me deleting readings that occurred during the alert, but I'm not sure that the constraint is necessary. Now, looking in from the outside as a developer / DBA, would that make you want to be sick, or does it seem reasonable? Is there perhaps another way of doing this that I may be missing? Thanks. EDIT: Here's another idea - it works in a different way. It stores each sensor state change, going from normal to alert, in a table, and then readings are simply associated with a particular state. This seems to solve all the problems - what d'ya think? (The only thing I'm not sure about is calling the table "SensorState"; I can't help thinking there's a better name (maybe SensorReadingGroup?)):
    [Location] LocationId
    [Sensor] SensorId, LocationId
    [SensorState] SensorStateId, SensorId, Timestamp, Status, IsInAlert
    [SensorReading] SensorReadingId, SensorStateId, Value, Timestamp
    There must be an elegant solution to this!
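    For that third idea, foreign keys can enforce that a reading always belongs to exactly one state (and, through the state, to one sensor), which removes the orphaned-reading worry from the second design. A rough DDL sketch, keeping the names from the post but with column types assumed:

        CREATE TABLE SensorState (
            SensorStateId INT NOT NULL PRIMARY KEY,
            SensorId      INT NOT NULL REFERENCES Sensor (SensorId),
            Timestamp     DATETIME NOT NULL,
            Status        INT NOT NULL,
            IsInAlert     BIT NOT NULL   -- 1 for the states that represent an active alert
        );

        CREATE TABLE SensorReading (
            SensorReadingId INT NOT NULL PRIMARY KEY,
            SensorStateId   INT NOT NULL REFERENCES SensorState (SensorStateId), -- every reading hangs off exactly one state
            Value           DECIMAL(10, 2) NOT NULL,
            Timestamp       DATETIME NOT NULL
        );

    With the default NO ACTION behavior on the foreign key, readings that belong to a state (and therefore possibly to an alert) cannot be deleted out from under it, which restores the constraint the second design gave up.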

    Read the article

  • Multiple Columns as Primary keys

    - by rockbala
    CREATE TABLE Persons
    (
        P_Id int NOT NULL,
        LastName varchar(255) NOT NULL,
        FirstName varchar(255),
        Address varchar(255),
        City varchar(255),
        CONSTRAINT pk_PersonID PRIMARY KEY (P_Id, LastName)
    )
    The above example is taken from w3schools. From the above example, my understanding is that both P_Id and LastName together represent a primary key for the table Persons - correct? Another question is: why would someone want to use multiple columns as the primary key instead of a single column? How many such columns can be used together as the primary key in a given table? Thanks, Balaji S
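    To the "why" question: a composite key is the natural choice for junction (link) tables, where neither column is unique on its own but the combination is, and it saves carrying a separate surrogate id. A small illustrative sketch (the table and columns are invented for the example, not taken from the question):

        CREATE TABLE Enrollment
        (
            StudentId int NOT NULL,
            CourseId  int NOT NULL,
            Grade     varchar(2),
            -- a student can take many courses and a course has many students,
            -- but each (StudentId, CourseId) pair may appear only once
            CONSTRAINT pk_Enrollment PRIMARY KEY (StudentId, CourseId)
        )

    As for how many columns are allowed, the limit is engine-specific; MySQL, for example, permits at most 16 columns in any index, including the primary key.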

    Read the article

  • Django unique_together error and validation

    - by zubinmehta
    class Votes(models.Model):
        field1 = models.ForeignKey(Blah1)
        field2 = models.ForeignKey(Blah2)
        class Meta:
            unique_together = (("field1", "field2"),)
    I am using this code as one of my models. Now I wanted to know two things: 1. It doesn't show any error, and it saved an entry which wasn't unique together; so is this piece of code correct? 2. How can the unique_together constraint be validated?
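    Two things usually explain the missing error: unique_together is enforced by the database as a composite UNIQUE constraint that Django creates when the table is created or migrated, so if the table already existed before the Meta option was added, the database has nothing to enforce; and on the Python side the check only runs through full_clean() / ModelForm validation, not through a plain save(). The constraint Django is aiming for corresponds to SQL along these lines (the table and constraint names below are assumptions - Django derives them from the app label and model, and ForeignKey columns get an _id suffix):

        ALTER TABLE myapp_votes
            ADD CONSTRAINT votes_field1_field2_uniq UNIQUE (field1_id, field2_id);

    Once that constraint actually exists in the database, a duplicate save() raises an IntegrityError, and calling instance.full_clean() before saving surfaces the problem as a ValidationError instead.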

    Read the article

  • convert a number to the shortest possible character string while retaining uniqueness

    - by alumb
    I have a list of digits, say "123456", and I need to map it to a string, any string. The only constraints on the map function are: each list of digits must map to a unique character string (this means the string can be arbitrarily long); the character string can only contain 0-9, a-z, A-Z. What map function would produce the shortest strings? Solutions in JavaScript are preferred. Note: clearly the simplest solution is to use the original list of digits, so make sure your solution does better than that.

    Read the article

  • error in mysql syntax

    - by fusion
    While executing the following query, I get an error that there's an error in the syntax near line 9. Since I'm using MySQL Workbench, I can't really figure out what could be wrong:
    CREATE TABLE IF NOT EXISTS `proquotes`.`thquotes` (
        `idQuotes` INT NOT NULL AUTO_INCREMENT ,
        `vAuthorID` VARCHAR(8) CHARACTER SET 'utf8' NOT NULL ,
        `vAuthor` VARCHAR(45) CHARACTER SET 'utf8' NOT NULL ,
        `cQuotes` MEDIUMTEXT CHARACTER SET 'utf8' NOT NULL ,
        `cArabic` MEDIUMTEXT CHARACTER SET 'utf8' NOT NULL ,
        `vReference` VARCHAR(100) CHARACTER SET 'utf8' NOT NULL ,
        PRIMARY KEY (`idQuotes`) ,
        INDEX `vAuthorID` () ,
        CONSTRAINT `vAuthorID`
            FOREIGN KEY ()
            REFERENCES `proquotes`.`author_info` ()
            ON DELETE NO ACTION
            ON UPDATE NO ACTION)
    DEFAULT CHARACTER SET = utf8;
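    The empty parentheses are what MySQL is rejecting: both INDEX and FOREIGN KEY need a column list, and the REFERENCES clause needs the referenced column as well. A corrected version might look like the following, assuming the key column in author_info is also called vAuthorID (the actual column name and its matching type need to be checked against that table):

        CREATE TABLE IF NOT EXISTS `proquotes`.`thquotes` (
            `idQuotes` INT NOT NULL AUTO_INCREMENT,
            `vAuthorID` VARCHAR(8) CHARACTER SET 'utf8' NOT NULL,
            `vAuthor` VARCHAR(45) CHARACTER SET 'utf8' NOT NULL,
            `cQuotes` MEDIUMTEXT CHARACTER SET 'utf8' NOT NULL,
            `cArabic` MEDIUMTEXT CHARACTER SET 'utf8' NOT NULL,
            `vReference` VARCHAR(100) CHARACTER SET 'utf8' NOT NULL,
            PRIMARY KEY (`idQuotes`),
            INDEX `idx_vAuthorID` (`vAuthorID`),                     -- the index needs the column it covers
            CONSTRAINT `fk_thquotes_author`
                FOREIGN KEY (`vAuthorID`)                            -- the referencing column in this table
                REFERENCES `proquotes`.`author_info` (`vAuthorID`)   -- assumed name of the key column in author_info
                ON DELETE NO ACTION
                ON UPDATE NO ACTION)
        DEFAULT CHARACTER SET = utf8;

    The foreign key column must also match the referenced column's type and character set exactly, and the parent table needs an index on that column, otherwise MySQL will reject the constraint with a different error.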

    Read the article

  • Web Safe Area (optimal resolution) for web app design

    - by M.A.X
    I'm in the process of designing a new web app and I'm wondering what 'web safe area' I should optimize the app layout and design for. I did some investigation and thinking on my own, but wanted to share this to see what the general opinion is. Here is what I found.
    Optimal Display Resolution:
    w3schools web stats seems to be the most referenced source (however, they state that these are results from their site and are biased towards tech-savvy users)
    http://www.w3counter.com/globalstats.php (aggregate data from something like 15,000 different sites that use their tracking services)
    StatCounter Global Stats Display Resolution (stats are based on aggregate data collected by StatCounter on a sample exceeding 15 billion pageviews per month, collected from across the StatCounter network of more than 3 million websites)
    NetMarketShare Screen Resolutions (marketshare.hitslink.com) (a web analytics consulting firm; they get data from browsers of site visitors to their on-demand network of live stats customers. The data is compiled from approximately 160 million visitors per month)
    Display Resolution Summary:
    There is a bit of variation between the above sources, but in general, as of Jan 2011 it looks like 1024x768 is about 20%, while ~85% have a higher resolution of at least 1280x768 (1280x800 is the most common of these with 15-20% of the total web, depending on the source; 1280x1024 and 1366x768 follow behind with 9-14% of the share). My guess would be that the higher resolution values will be even more common if we filter on North America, and even higher if we filter on N. American corporate users (unfortunately I couldn't find any free geographically filtered statistics). Another point to note is that the 1024x768 desktop user population is likely lower than the aforementioned 20%, seeing as the iPad (1024x768 native display) is likely propping up those numbers. My recommendation would be to optimize around the 1280x768 constraint (*note: 1280x768 is actually a relatively rare resolution, but I think it's a valid constraint range considering that 1366x768 is relatively common and 1280 is the most common horizontal resolution).
    Browser + OS Constraints:
    To further add to the constraints, we have to subtract the space taken up by the browser (assuming IE, which is the most space-consuming) and the OS (assuming WinXP-Win7):
    Win7 has the biggest taskbar footprint at a height of 40px (XP's and Vista's is 30px)
    The default IE8 view uses up 25px at the bottom of the screen with the status bar and a further 120px at the top of the screen with the window's title bar and the browser UI (assuming the default 'favorites' toolbar is present; it would instead be 91px without the favorites toolbar). Assuming no scrollbar, we also lose a total of 4px horizontally for the window outline.
    This means that we are left with 583px of vertical space and 1276px of horizontal. In other words, a Web Safe Area of 1276 x 583. Is this a correct line of thinking? I tried to Google some design best practices, but most still talk about designing around 1024x768, which seems to be quickly disappearing. Any help on this would be greatly appreciated! Thanks.

    Read the article

  • Calling a static method on a generic type parameter

    - by Remi Despres-Smyth
    I was hoping to do something like this, but it appears to be illegal in C#:
    public Collection MethodThatFetchesSomething<T>()
        where T : SomeBaseClass
    {
        return T.StaticMethodOnSomeBaseClassThatReturnsCollection();
    }
    I get a compile-time error: "'T' is a 'type parameter', which is not valid in the given context." Given a generic type parameter, how can I call a static method on the generic class? The static method has to be available, given the constraint.

    Read the article

  • Can LINQ-to-SQL omit unspecified columns on insert so a database default value is used?

    - by Todd Ropog
    I have a non-nullable database column which has a default value set. When inserting a row, sometimes a value is specified for the column, sometimes one is not. This works fine in TSQL when the column is omitted. For example, given the following table:
    CREATE TABLE [dbo].[Table1](
        [id] [int] IDENTITY(1,1) NOT NULL,
        [col1] [nvarchar](50) NOT NULL,
        [col2] [nvarchar](50) NULL,
        CONSTRAINT [PK_Table1] PRIMARY KEY CLUSTERED ([id] ASC)
    )
    GO
    ALTER TABLE [dbo].[Table1] ADD CONSTRAINT [DF_Table1_col1] DEFAULT ('DB default') FOR [col1]
    The following two statements will work:
    INSERT INTO Table1 (col1, col2) VALUES ('test value', '')
    INSERT INTO Table1 (col2) VALUES ('')
    In the second statement, the default value is used for col1. The problem I have is when using LINQ-to-SQL (L2S) with a table like this. I want to produce the same behavior, but I can't figure out how to make L2S do that. I want to be able to run the following code and have the first row get the value I specify and the second row get the default value from the database:
    var context = new DataClasses1DataContext();
    var row1 = new Table1 { col1 = "test value", col2 = "" };
    context.Table1s.InsertOnSubmit(row1);
    context.SubmitChanges();
    var row2 = new Table1 { col2 = "" };
    context.Table1s.InsertOnSubmit(row2);
    context.SubmitChanges();
    If the Auto Generated Value property of col1 is False, the first row is created as desired, but the second row fails with a null error on col1. If Auto Generated Value is True, both rows are created with the default value from the database. I've tried various combinations of Auto Generated Value, Auto-Sync and Nullable, but nothing I've tried gives the behavior I want. L2S does not omit the column from the insert statement when no value is specified. Instead it does something like this:
    INSERT INTO Table1 (col1, col2) VALUES (null, '')
    ...which of course causes a null error on col1. Is there some way to get L2S to omit a column from the insert statement if no value is given? Or is there some other way to get the behavior I want? I need the default value at the database level because not all row inserts are done via L2S, and in some cases the default value is a little more complex than a hard-coded value (e.g. creating the default based on another field), so I'd rather avoid duplicating that logic.

    Read the article

  • Will this force a reinitialize in Merge Replication Topology?

    - by Refracted Paladin
    I need to add a couple of columns to a table that is part of a replication set. It is not a constraint column or a part of any article filters, and it allows NULL. I have a pretty good idea that I can run this --
    ALTER TABLE tblPlanDomain ADD ReportWageES VARCHAR (100) NULL
    and NOT force all my clients to reinitialize, but I was hoping for some reassurance. Can anyone verify this one way or the other for me? Thanks,

    Read the article
