Search Results

Search found 61631 results on 2466 pages for 'duplicate data'.


  • Why am I getting an error in this switch statement written in C?

    - by mekasperasky
    I have a character array b which stores different identifiers in different iterations. I have to compare b with various identifiers of the C programming language and print them into a file. When I do it using the following switch statement it gives me errors:

      b[i]='\0';
      switch(b[i])
      {
      case "if":fprintf(fp2,"if ----> IDENTIFIER \n");
      case "then":fprintf(fp2,"then ----> IDENTIFIER \n");
      case "else":fprintf(fp2,"else ----> IDENTIFIER \n");
      case "switch":fprintf(fp2,"switch ----> IDENTIFIER \n");
      case 'printf':fprintf(fp2,"prtintf ----> IDENTIFIER \n");
      case 'scanf':fprintf(fp2,"else ----> IDENTIFIER \n");
      case 'NULL':fprintf(fp2,"NULL ----> IDENTIFIER \n");
      case 'int':fprintf(fp2,"INT ----> IDENTIFIER \n");
      case 'char':fprintf(fp2,"char ----> IDENTIFIER \n");
      case 'float':fprintf(fp2,"float ----> IDENTIFIER \n");
      case 'long':fprintf(fp2,"long ----> IDENTIFIER \n");
      case 'double':fprintf(fp2,"double ----> IDENTIFIER \n");
      case 'char':fprintf(fp2,"char ----> IDENTIFIER \n");
      case 'const':fprintf(fp2,"const ----> IDENTIFIER \n");
      case 'continue':fprintf(fp2,"continue ----> IDENTIFIER \n");
      case 'break':fprintf(fp2,"long ----> IDENTIFIER \n");
      case 'for':fprintf(fp2,"long ----> IDENTIFIER \n");
      case 'size of':fprintf(fp2,"size of ----> IDENTIFIER \n");
      case 'register':fprintf(fp2,"register ----> IDENTIFIER \n");
      case 'short':fprintf(fp2,"short ----> IDENTIFIER \n");
      case 'auto':fprintf(fp2,"auto ----> IDENTIFIER \n");
      case 'while':fprintf(fp2,"while ----> IDENTIFIER \n");
      case 'do':fprintf(fp2,"do ----> IDENTIFIER \n");
      case 'case':fprintf(fp2,"case ----> IDENTIFIER \n");
      }

    The errors being:

      lex.c:94:13: warning: character constant too long for its type
      lex.c:95:13: warning: character constant too long for its type
      lex.c:96:13: warning: multi-character character constant
      lex.c:97:13: warning: multi-character character constant
      lex.c:98:13: warning: multi-character character constant
      lex.c:99:13: warning: character constant too long for its type
      lex.c:100:13: warning: multi-character character constant
      lex.c:101:13: warning: character constant too long for its type
      lex.c:102:13: warning: multi-character character constant
      lex.c:103:13: warning: character constant too long for its type
      lex.c:104:13: warning: character constant too long for its type
      lex.c:105:13: warning: character constant too long for its type
      lex.c:106:13: warning: multi-character character constant
      lex.c:107:13: warning: character constant too long for its type
      lex.c:108:13: warning: character constant too long for its type
      lex.c:109:13: warning: character constant too long for its type
      lex.c:110:12: warning: multi-character character constant
      lex.c:111:13: warning: character constant too long for its type
      lex.c:112:13: warning: multi-character character constant
      lex.c:113:13: warning: multi-character character constant
      lex.c: In function ‘int main()’:
      lex.c:90: error: case label does not reduce to an integer constant
      lex.c:91: error: case label does not reduce to an integer constant
      lex.c:92: error: case label does not reduce to an integer constant
      lex.c:93: error: case label does not reduce to an integer constant
      lex.c:94: warning: overflow in implicit constant conversion
      lex.c:95: warning: overflow in implicit constant conversion
      lex.c:95: error: duplicate case value
      lex.c:94: error: previously used here
      lex.c:96: warning: overflow in implicit constant conversion
      lex.c:97: warning: overflow in implicit constant conversion
      lex.c:98: warning: overflow in implicit constant conversion
      lex.c:99: warning: overflow in implicit constant conversion
      lex.c:99: error: duplicate case value
      lex.c:97: error: previously used here
      lex.c:100: warning: overflow in implicit constant conversion
      lex.c:101: warning: overflow in implicit constant conversion
      lex.c:102: warning: overflow in implicit constant conversion
      lex.c:102: error: duplicate case value
      lex.c:98: error: previously used here
      lex.c:103: warning: overflow in implicit constant conversion
      lex.c:103: error: duplicate case value
      lex.c:97: error: previously used here
      lex.c:104: warning: overflow in implicit constant conversion
      lex.c:104: error: duplicate case value
      lex.c:101: error: previously used here
      lex.c:105: warning: overflow in implicit constant conversion
      lex.c:106: warning: overflow in implicit constant conversion
      lex.c:106: error: duplicate case value
      lex.c:98: error: previously used here
      lex.c:107: warning: overflow in implicit constant conversion
      lex.c:107: error: duplicate case value
      lex.c:94: error: previously used here
      lex.c:108: warning: overflow in implicit constant conversion
      lex.c:108: error: duplicate case value
      lex.c:98: error: previously used here
      lex.c:109: warning: overflow in implicit constant conversion
      lex.c:109: error: duplicate case value
      lex.c:97: error: previously used here
      lex.c:110: warning: overflow in implicit constant conversion
      lex.c:111: warning: overflow in implicit constant conversion
      lex.c:111: error: duplicate case value
      lex.c:101: error: previously used here
      lex.c:112: warning: overflow in implicit constant conversion
      lex.c:112: error: duplicate case value
      lex.c:110: error: previously used here
      lex.c:113: warning: overflow in implicit constant conversion
      lex.c:113: error: duplicate case value
      lex.c:101: error: previously used here

    Read the article

  • Is there a format or service for resume/CV data?

    - by Ben Dauphinee
    I have noticed through the process of signing up for various freelance and job seeking or professional network sites that they all want your resume/CV data. And I am really getting tired of copy/pasting this data, especially since I have a website. Is there a standard format or service somewhere that I do not know about for this data? If not, does anyone want to help me build something like this out? I'm thinking a service similar to OpenID that allows you to maintain a central resume to have your data pulled from. No more filling in the same data over and over, and having to maintain the copies on any of the plethora of websites that have that data. Takers?

    Read the article

  • How best to convert CakePHP date picker form data to a PHP DateTime object?

    - by Daren Thomas
    I'm doing this in app/views/mymodel/add.ctp:

      <?php echo $form->input('Mymodel.mydatefield'); ?>

    And then, in app/controllers/mymodel_controller.php:

      function add() {
          # ... (if we have some submitted data)
          $datestring = $this->data['Mymodel']['mydatefield']['year'] . '-'
                      . $this->data['Mymodel']['mydatefield']['month'] . '-'
                      . $this->data['Mymodel']['mydatefield']['day'];
          $mydatefield = DateTime::createFromFormat('Y-m-d', $datestring);
      }

    There absolutely has to be a better way to do this - I just haven't found the CakePHP way yet... What I would like to do is:

      function add() {
          # ... (if we have some submitted data)
          $mydatefield = $this->data['Mymodel']['mydatefiled']; # obviously doesn't work
      }

    Read the article

  • How to think in data stores instead of databases?

    - by Jim
    As an example, Google App Engine uses data stores, not a database, to store data. Does anybody have any tips for using data stores instead of databases? It seems I've trained my mind to think 100% in object relationships that map directly to table structures, and now it's hard to see anything differently. I can understand some of the benefits of data stores (e.g. performance and the ability to distribute data), but some good database functionality is sacrificed (e.g. joins). Does anybody who has worked with data stores like BigTable have any good advice on working with them?
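
    As a rough illustration of the shift in thinking (my sketch, not from the question - plain dicts standing in for a BigTable-style key/value store), the idea is to denormalize so that a single key lookup returns everything needed, accepting that there are no joins:

      # Minimal stand-in for a key/value data store: the only "query" is a key lookup.
      store = {}

      def put(kind, key, entity):
          store[(kind, key)] = entity

      def get(kind, key):
          return store[(kind, key)]

      # Denormalized: the author's display name is copied onto the post and the tag
      # list is embedded, so rendering a post never needs a join.
      put("author", "jim", {"name": "Jim", "bio": "Asks about data stores."})
      put("post", "p1", {"title": "Thinking in data stores",
                         "author_name": "Jim",
                         "tags": ["gae", "bigtable"]})

      post = get("post", "p1")
      print(post["title"], "by", post["author_name"], post["tags"])  # one lookup, no join

    The trade-off is the one mentioned above: if the author renames, every copied author_name has to be rewritten, which is the price paid for join-free reads.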

    Read the article

  • Showing a loading spinner only if the data has not been cached.

    - by Aaron Mc Adam
    Hi guys, Currently, my code shows a loading spinner gif, returns the data and caches it. However, once the data has been cached, there is a flicker of the loading gif for a split second before the data gets loaded in. It's distracting and I'd like to get rid of it. I think I'm using the wrong method in the beforeSend function here:

      $.ajax({
          type  : "GET",
          cache : false,
          url   : "book_data.php",
          data  : { keywords : keywords, page : page },
          beforeSend : function() {
              $('.jPag-pages li:not(.cached)').each(function (i) {
                  $('#searchResults').html('<p id="loader">Loading...<img src="../assets/images/ajax-loader.gif" alt="Loading..." /></p>');
              });
          },
          success : function(data) {
              $('.jPag-current').parent().addClass('cached');
              $('#searchResults').replaceWith($(data).find('#searchResults')).find('table.sortable tbody tr:odd').addClass('odd');
              detailPage();
              selectForm();
          }
      });

    Read the article

  • Is there a way to only backup a SQL 2005 database structure fully, but only the data in a certain se

    - by TheSoftwareJedi
    I have several schemas in my database, and the largest one ("large" meaning disk space consumed) is my "web" schema, which is a denormalized copy of data in the operational schemas. This denormalized data can be reconstructed at any time, and is merely there for extremely fast read purposes. Since the data is redundant, and VERY large, I'd like to exclude it from being backed up. I already have stored procedures that can regenerate all of the data in that schema in a couple of hours - for use in the event of a failure. I assume I can split the tables in this schema out to another data file or such (ideally even on another drive for faster reads), but is there a way to never have that data file backed up, yet still be able to restore its structure (and other DDL stuff like procs, views, etc.) in the event of a failure? Somewhat related, can I also have these tables not do transaction logging, if I go to "Full" backup mode for the rest of the database?

    Read the article

  • How can I structure and recode messy categorical data in R?

    - by briandk
    I'm struggling with how best to structure categorical data that's messy, and comes from a dataset I'll need to clean.

    The Coding Scheme: I'm analyzing data from a university science course exam. We're looking at patterns in student responses, and we developed a coding scheme to represent the kinds of things students are doing in their answers. A subset of the coding scheme is shown below. Note that within each major code (1, 2, 3) are nested non-unique sub-codes (a, b, ...).

    What the Raw Data Looks Like: I've created an anonymized, raw subset of my actual data which you can view here. Part of my problem is that those who coded the data noticed that some students displayed multiple patterns. The coders' solution was to create enough columns (reason1, reason2, ...) to hold students with multiple patterns. That becomes important because the order (reason1, reason2) is arbitrary--two students (like student 41 and student 42 in my dataset) who correctly applied "dependency" should both register in an analysis, regardless of whether 3a appears in the reason1 column or the reason2 column.

    How Can I Best Structure Student Data? Part of my problem is that in the raw data, not all students display the same patterns, or the same number of them, in the same order. Some students may do just one thing, others may do several. So, an abstracted representation of example students might look like this: Note in the example above that student002 and student003 both are coded as "1b", although I've deliberately shown the order as different to reflect the reality of my data.

    My (Practical) Questions:
    - Should I concatenate reason1, reason2, ... into one column?
    - How can I (re)code the reasons in R to reflect the multiplicity for some students?

    Thanks. I realize this question is as much about good data conceptualization as it is about specific features of R, but I thought it would be appropriate to ask it here. If you feel it's inappropriate for me to ask the question, please let me know in the comments, and stackoverflow will automatically flood my inbox with sadface emoticons. If I haven't been specific enough, please let me know and I'll do my best to be clearer.

    Read the article

  • Assigning a variable of a struct that contains an instance of a class to another variable

    - by xport
    In my understanding, assigning a variable of a struct to another variable of the same type will make a copy. But this rule seems broken, as shown in the following figure. Could you explain why this happens?

      using System;

      namespace ReferenceInValue
      {
          class Inner
          {
              public int data;
              public Inner(int data) { this.data = data; }
          }

          struct Outer
          {
              public Inner inner;
              public Outer(int data) { this.inner = new Inner(data); }
          }

          class Program
          {
              static void Main(string[] args)
              {
                  Outer p1 = new Outer(1);
                  Outer p2 = p1;
                  Console.WriteLine("p1:{0}, p2:{1}", p1.inner.data, p2.inner.data);

                  p1.inner.data = 2;
                  Console.WriteLine("p1:{0}, p2:{1}", p1.inner.data, p2.inner.data);

                  p2.inner.data = 3;
                  Console.WriteLine("p1:{0}, p2:{1}", p1.inner.data, p2.inner.data);

                  Console.ReadKey();
              }
          }
      }
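
    Not from the question, and only an analogy (Python has no value types, so copy.copy stands in for the struct assignment): the point the example above illustrates is that copying the outer value duplicates the reference stored inside it, not the Inner object it points to, so both copies see the same mutations.

      import copy
      from dataclasses import dataclass

      @dataclass
      class Inner:
          data: int

      @dataclass
      class Outer:
          inner: Inner

      p1 = Outer(Inner(1))
      p2 = copy.copy(p1)      # shallow copy: duplicates the reference to Inner, not Inner itself

      p1.inner.data = 2
      print(p1.inner.data, p2.inner.data)   # 2 2 - both outers share one Inner
      print(p1.inner is p2.inner)           # True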

    Read the article

  • How do I write raw binary data in Python?

    - by Chris B.
    I've got a Python program that stores and writes data to a file. The data is raw binary data, stored internally as str. I'm writing it out through a utf-8 codec. However, I get

      UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 25: character maps to <undefined>

    in the cp1252.py file. This looks to me like Python is trying to interpret the data using the default code page. But it doesn't have a default code page. That's why I'm using str, not unicode. I guess my questions are:
    - How do I represent raw binary data in memory, in Python?
    - When I'm writing raw binary data out through a codec, how do I encode/unencode it?
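
    Not the asker's code, but in current Python terms the usual answer is to keep binary data as bytes and use binary-mode files, so no codec (and no UnicodeDecodeError) is ever involved - a minimal sketch:

      # Raw binary data lives in a bytes object (the byte-oriented counterpart of the
      # old Python 2 str), never in a text string.
      payload = bytes([0x8D, 0x00, 0xFF, 0x10])

      with open("blob.bin", "wb") as f:   # "wb" = write binary: no encoding is applied
          f.write(payload)

      with open("blob.bin", "rb") as f:   # read it back as bytes, unchanged
          assert f.read() == payload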

    Read the article

  • What are the repercussions of not checking existing data when adding a foreign key?

    - by scottm
    I've inherited a database that doesn't exactly strive for data integrity. I am trying to add some foreign keys to change that, but there is data in some tables that doesn't fit the constraints. Most likely, the data won't be used again so I want to know what problems I might face by leaving it there. The other option I see is to move it into some kind of table without referential constraints, just for historical purposes. So, what are the repercussions of not checking existing data? If I create a foreign key constraint on a table and don't check existing data, will all new data inserted into the table be enforced?

    Read the article

  • Want to save data field from form into two columns of two models.

    - by vette982
    I have a Profile model with a hasOne relationship to a Detail model. I have a registration form that saves data into both models' tables, but I want the username field from the Profile model to be copied over to the username field in the Detail model so that each has the same username.

      function new_account() {
          if(!empty($this->data)) {
              $this->Profile->modified = date("Y-m-d H:i:s");
              if($this->Profile->save($this->data)) {
                  $this->data['Detail']['profile_id'] = $this->Profile->id;
                  $this->data['Detail']['username'] = $this->Profile->username;
                  $this->Profile->Detail->save($this->data);
                  $this->Session->setFlash('Your registration was successful.');
                  $this->redirect(array('action'=>'index'));
              }
          }
      }

    This code in my Profile controller gives me the error:

      Undefined property: Profile::$username

    Any ideas?

    Read the article

  • What happens to existing data if the entity structure is modified or deleted on GAE?

    - by Eonil
    GAE recommends using JDO/JPA, but I have a serious question about using an OODB like them. JDO is based on the user's class structure, and that structure has to be modified continually as the service advances. So, if a property is removed from a data (entity) class, what happens to the existing data stored in that property? If a data (entity) class is renamed for refactoring reasons, how does JDO know about the renaming - or is all the data lost? The major point is: "How do JDO/GAE/BigTable apply modifications of a class to existing entity structure and data?"

    Read the article

  • best way to store php data on a page for use with javascript/jquery?

    - by Haroldo
    Ok, so I'm trying to work out the fastest way of storing data on my page without slowing the page load: I need to store information in the page to be used later by jQuery. My page is an events page and I want to attach data to each event anchor; there are 100+ events to attach data to. The event anchors are created with a PHP loop, so I could create the data elements within this loop by either:
    - using un-semantic tags, i.e. *rel="some_data"*, or
    - creating a jquery.data() entry for each iteration of the loop,
    or I could run the loop again, separately, this time inside script tags with jquery.data(). Would really appreciate any thoughts on this!

    Read the article

  • Offsite data storage for simple app, or a similar supported persistence mechanism?

    - by jdk
    Question: Is there a usable facebook entry point to the Data Storage API that facebook lists on their app admin page for developers, or should I consider an alternate mechanism? What alternative mechanisms exist to simply persist my information offsite (away from my server app) without stuffing it into a cookie that's prone to expire? ...

    Background: The facebook Data Store Admin tool is made available in a facebook App's Settings. However, when I visit the DataStoreAdmin link nothing works (i.e. clicking the buttons to define the data store types and objects does nothing - I have tried different browsers). The Wiki page for the Data Store API hasn't been updated recently, and the second-last update says the beta Data Store was taken offline. It seems odd the link would be readily available and highly visible at the top of the App configuration area if indeed it's defunct. I was hoping for some kind of key/value pair solution to remove the data calls from my own server.
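
    Not from the question (and the facebook Data Store API does appear to be defunct): any offsite key/value store can fill this role. Purely as a hedged sketch of the idea, here is what per-user persistence could look like against Amazon S3 via boto3 - the bucket name and key layout are hypothetical, and AWS credentials are assumed to be configured in the environment:

      import json

      import boto3

      s3 = boto3.client("s3")
      BUCKET = "my-app-user-data"   # hypothetical bucket

      def save(user_id, data):
          # One small JSON document per user, stored offsite under a predictable key.
          s3.put_object(Bucket=BUCKET,
                        Key="users/%s.json" % user_id,
                        Body=json.dumps(data).encode("utf-8"))

      def load(user_id):
          obj = s3.get_object(Bucket=BUCKET, Key="users/%s.json" % user_id)
          return json.loads(obj["Body"].read())

      save("12345", {"favourite_colour": "green"})
      print(load("12345"))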

    Read the article

  • How to convert searchTwitter results (from library(twitteR)) into a data.frame?

    - by analyticsPierce
    I am working on saving twitter search results into a database (SQL Server) and am getting an error when I pull the search results from twitteR. If I execute:

      library(twitteR)
      puppy <- as.data.frame(searchTwitter("puppy", session=getCurlHandle(), num=100))

    I get an error of:

      Error in as.data.frame.default(x[[i]], optional = TRUE) :
        cannot coerce class structure("status", package = "twitteR") into a data.frame

    This is important because in order to use RODBC to add this to a table using sqlSave it needs to be a data.frame. At least that's the error message I got:

      Error in sqlSave(localSQLServer, puppy, tablename = "puppy_staging", :
        should be a data frame

    So does anyone have any suggestions on how to coerce the list to a data.frame or how I can load the list through RODBC?

    Read the article

  • What's the most simple way to retrieve all data from a table and save it back in .NET 3.5?

    - by zoman
    I have a number of tables containing some basic (business related) mapping data. What's the simplest way to load the data from those tables, then save the modified values back? (All data should be replaced in the tables.) An ORM is out of the question, as I would like to avoid creating domain objects for each table. The actual editing of the data is not an issue (it is exported into Excel where the data is edited, then the file is uploaded with the modified data). The technology is .NET 3.5 (ASP.NET MVC) and SQL Server 2005. Thanks.

    Read the article

  • RabbitMQ as a proxy between a data store and a producer ?

    - by hyperboreean
    I have some code that produces lots of data that should be stored in the database. The problem is that the database can't keep up with the rate at which the data gets produced. So I am wondering whether some kind of queuing mechanism would help in this situation - I am thinking in particular of RabbitMQ, and whether it is feasible to have the data stored in its queues until some consumer gets the data out of them and pushes it to the database. Also, I am not particularly interested in whether that data made it to the database or not, because pretty soon the same data will be updated.
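
    Not from the question - just a language-agnostic sketch of the buffering idea it describes, with Python's stdlib queue standing in for a RabbitMQ queue and sqlite3 standing in for the real database. The point is that the producer only ever pays the cost of an in-memory enqueue, while a consumer drains the queue in large batches with one commit per batch:

      import queue
      import sqlite3

      buf = queue.Queue()                      # stands in for the RabbitMQ queue
      db = sqlite3.connect(":memory:")         # stands in for the slow database
      db.execute("CREATE TABLE readings (value REAL)")

      def produce(values):
          for v in values:
              buf.put(v)                       # producing never waits on the database

      def flush(batch_size=500):
          # Drain the queue into the database in batches, one commit per batch.
          while not buf.empty():
              batch = []
              while not buf.empty() and len(batch) < batch_size:
                  batch.append((buf.get(),))
              db.executemany("INSERT INTO readings (value) VALUES (?)", batch)
              db.commit()

      produce(i * 0.5 for i in range(10000))
      flush()            # in the real setup a consumer process does this continuously
      print(db.execute("SELECT COUNT(*) FROM readings").fetchone()[0])   # 10000

    Since the question says losing or superseding in-flight data is acceptable, a plain non-durable queue is enough; RabbitMQ durability settings would only matter if that changed.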

    Read the article

  • how to use javascript to download a file on Chrome without Chrome auto renaming file to "download"? [duplicate]

    - by user3688566
    This question already has an answer here: "Is there any way to specify a suggested filename when using data: URI?" (11 answers)

    I use javascript to generate a file and download it. It seems that depending on the version of Chrome, the downloaded file name can be auto renamed to 'download'. Is there a way to avoid it? This is my code:

      var link = document.createElement("a");
      link.setAttribute("href", 'data:application/octet-stream,' + 'file content here');
      link.setAttribute("download", 'file1.txt');
      link.click();

    This is not a duplicated question because I am using the latest Chrome and the previously suggested hyperlink is exactly what I am using. I think Chrome v34 works fine, but once my Chrome autoupdated to v35, it went back to the 'download' file name.

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… (we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here.

    We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post.

    For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: (diagram not reproduced here)

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported in the raw data provided in the linked worksheet) the charts and dialog below ignore source file sizes less than 1MB.

    The first chart (not reproduced here) illustrates some interesting points about the results:
    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    The second chart (not reproduced here) is another view of the same data as the prior chart, just with the axis changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes.
    This last chart (not reproduced here) shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:
    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests (Experiment Metadata, Experiment Datasets, 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads, Raw Data)
    - OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data, 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks)
    - Excel worksheet showing summarizations and comparisons
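
    The author's harness is C# against the 1.1 Azure storage client; purely to make the block-and-parallelize pattern above concrete, here is a hedged, storage-agnostic Python sketch. upload_block and the final commit step are hypothetical stand-ins for PutBlock/PutBlockList, and a production version would bound how many blocks are held in memory at once:

      import concurrent.futures
      import hashlib

      BLOCK_SIZE = 1 * 1024 * 1024   # 1MB: the best fixed block size found above

      def read_blocks(path, block_size=BLOCK_SIZE):
          # Split the source file into (index, bytes) blocks.
          with open(path, "rb") as f:
              index = 0
              while True:
                  chunk = f.read(block_size)
                  if not chunk:
                      break
                  yield index, chunk
                  index += 1

      def upload_block(blob_name, index, chunk):
          # Hypothetical stand-in for the storage client's PutBlock call: compute the
          # MD5 that would accompany the block so the service can verify what arrived.
          digest = hashlib.md5(chunk).hexdigest()
          return index, digest       # a real client would return the committed block id

      def upload_file(path, blob_name, workers=8):
          # Upload all blocks in parallel, then "commit" them in order (PutBlockList).
          with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
              futures = [pool.submit(upload_block, blob_name, i, c)
                         for i, c in read_blocks(path)]
              results = [f.result() for f in futures]
          ordered = [digest for _, digest in sorted(results)]
          return ordered             # stand-in for the PutBlockList commit step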

    Read the article

  • Oracle OpenWorld 2013 – Wrap up by Sven Bernhardt

    - by JuergenKress
    OOW 2013 is over and we’re heading home, so it is time to lean back and reflect on the impressions we have from the conference. First of all: OOW was great! It was a pleasure to be a part of it. As already mentioned in our last blog article: it was the biggest OOW ever. Parallel to the conference the America’s Cup took place in San Francisco and the Oracle Team America won. Amazing job by the team, and again congratulations from our side. Back to the conference. The main topics for us are:
    - Oracle SOA / BPM Suite 12c
    - Adaptive Case Management (ACM)
    - Big Data
    - Fast Data
    - Cloud
    - Mobile
    Below we will go into a little more detail on the key takeaways regarding the mentioned points.

    Oracle SOA / BPM Suite 12c: During the five days at OOW, first details of the upcoming major release of Oracle SOA Suite 12c and Oracle BPM Suite 12c have been introduced. Some new key features are:
    - Managed File Transfer (MFT) for transferring big files from a source to a target location
    - Enhanced REST support by introducing a new REST binding
    - Introduction of a generic cloud adapter, which can be used to connect to different cloud providers, like Salesforce
    - Enhanced analytics with BAM, which has been totally reengineered (BAM Console now also runs in Firefox!)
    - Introduction of templates (OSB pipelines, component templates, BPEL activities templates)
    - EM as a single monitoring console
    - OSB design-time integration into JDeveloper (Really great!)
    - Enterprise modeling capabilities in BPM Composer
    These are only a few points from what is coming with 12c. We are really looking forward to the new release coming out, because this seems to be really great stuff. The suite becomes more and more integrated. From 10g to 11g it was an evolution in terms of developing SOA-based applications. With 12c, Oracle continues its way - very impressive.

    Adaptive Case Management: Another fantastic topic was Adaptive Case Management (ACM). The Oracle PMs did a great job, especially at the demo grounds, in showing the upcoming Case Management UI (will be available in 11g with the next BPM Suite MLR Patch), the roadmap, and the differences from traditional business process modeling. They have been very busy during the conference because a lot of partners and customers have been interested.

    Big Data: Big Data is one of the current hype themes. Because of huge data amounts from different internal or external sources, handling this data becomes more and more challenging. Companies have a need for analyzing the data to optimize their business. The challenge is here: the amount of data is growing daily! To store and analyze the data efficiently, it is necessary to have a scalable and flexible infrastructure. Here it is important that hardware and software are engineered to work together. Therefore several new features of the Oracle Database 12c, like the new in-memory option, have been presented by Larry Ellison himself. On the hardware side, new server machines like the Fujitsu M10 and new processors, such as Oracle’s new M6-32, have been announced. The performance improvements when using one of these hardware components in connection with the improved software solutions were really impressive. For more details about this, please take a look at our previous blog post.
    Regarding Big Data, Oracle also introduced their Big Data architecture, which consists of:
    - Oracle Big Data Appliance, which is preconfigured with Hadoop
    - Oracle Exadata, which stores a huge amount of data efficiently, to achieve optimal query performance
    - Oracle Exalytics as a fast and scalable business analytics system
    Analysis of the stored data can be performed using SQL, by streaming the data directly from Hadoop to an Oracle Database 12c. Alternatively the analysis can be implemented directly in Hadoop using “R”. In addition, Oracle BI Tools can be used to analyze the data.

    Fast Data: Fast Data is a complementary approach to Big Data. A huge amount of mostly unstructured data comes in via different channels with a high frequency. The analysis of these data streams is also important for companies, because the incoming data has to be analyzed for business-relevant patterns in real time. Therefore these patterns must be identified efficiently and performantly. To do so, in-memory grid solutions in combination with Oracle Coherence and Oracle Event Processing demonstrated very impressively how efficient real-time data processing can be. One example of a Fast Data solution shown during OOW was the analysis of twitter streams regarding customer satisfaction. The feeds with negative words like “bad” or “worse” were filtered, and after a defined threshold had been reached in a certain timeframe, a business event was triggered.

    Cloud: Another key trend in the IT market is of course Cloud Computing and what it means for companies and their businesses. Oracle announced their Cloud strategy and vision - companies can focus on their real business while all of the applications are available via the Cloud. This also includes Oracle Database or Oracle WebLogic, so that companies can also build, deploy and run their own applications within the cloud. Three different approaches have been introduced:
    - Infrastructure as a Service (IaaS)
    - Platform as a Service (PaaS)
    - Software as a Service (SaaS)
    With the IaaS approach, only the infrastructure components will be managed in the Cloud. Customers will be very flexible regarding memory, storage or number of CPUs because those parameters can be adjusted elastically. The PaaS approach means that besides the infrastructure, the platforms (such as databases or application servers) necessary for running applications will also be provided within the Cloud. Here customers can also decide whether installation and management of these infrastructure components should be done by Oracle. The SaaS approach is the most complete one: all applications a company uses are managed in the Cloud. Oracle is planning to provide all of their applications, like ERP systems or HR applications, as Cloud services.

    In conclusion this seems to be a very forward-thinking strategy, which opens up new possibilities for customers to manage their infrastructure and applications in a flexible, scalable and future-oriented manner. As you can see, our OOW days have been very, very interesting. We collected a lot of helpful information for our projects. The new innovations presented at the conference are great and being part of this was even greater! We are looking forward to next year’s conference!
Links: http://www.oracle.com/openworld/index.html http://thecattlecrew.wordpress.com/2013/09/23/first-impressions-from-oracle-open-world-2013 SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: cattleCrew,Sven Bernhard,OOW2013,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • org.apache.cxf.interceptor.Fault: Unmarshalling Error: Duplicate default namespace declaration.

    - by JohnC
    Not sure why I am receiving this after the web service runs and I try to return to my client-side bean. The web service works perfectly outside of my webserver in SoapUI.

      org.apache.cxf.interceptor.Fault: Unmarshalling Error: Duplicate default namespace declaration.
       at [row,col {unknown-source}]: [1,321]
        at org.apache.cxf.jaxb.JAXBEncoderDecoder.unmarshall(JAXBEncoderDecoder.java:764)
        at org.apache.cxf.jaxb.JAXBEncoderDecoder.unmarshall(JAXBEncoderDecoder.java:623)
        at org.apache.cxf.jaxb.io.DataReaderImpl.read(DataReaderImpl.java:128)
        at org.apache.cxf.interceptor.DocLiteralInInterceptor.handleMessage(DocLiteralInInterceptor.java:101)
        at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:236)
        at org.apache.cxf.endpoint.ClientImpl.onMessage(ClientImpl.java:671)
        at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:2177)
        at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:2057)
        at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1982)
        at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:66)
        at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:637)
        at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
        at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:236)
        at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:483)
        at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:309)
        at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:261)
        at org.apache.cxf.frontend.ClientProxy.invokeSync(ClientProxy.java:73)
        at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:124)

    Read the article

  • SQL: How to select rows from a table while ignoring the duplicate field values?

    - by Maxxon
    How do I select rows from a table while ignoring duplicate field values? Here is an example:

      id  user_id  message
      1   Adam     "Adam is here."
      2   Peter    "Hi there this is Peter."
      3   Peter    "I am getting sick."
      4   Josh     "Oh, snap. I'm on a boat!"
      5   Tom      "This show is great."
      6   Laura    "Textmate rocks."

    What I want to achieve is to select the recently active users from my db. Let's say I want to select the 5 most recently active users. The problem is that the following script selects Peter twice:

      mysql_query("SELECT * FROM messages ORDER BY id DESC LIMIT 5 ");

    What I want is to skip the row when it gets to Peter again, and select the next result, in our case Adam. So I don't want to show my visitors that the recently active users were Laura, Tom, Josh, Peter, and Peter again. That does not make any sense; instead I want to show them this way: Laura, Tom, Josh, Peter, (skipping Peter) and Adam. Is there an SQL command I can use for this problem?
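
    Not from the thread, but one common fix is to collapse the table to one row per user (the user's most recent message id) and order by that. A hedged sketch, using Python's sqlite3 purely so the example runs as-is - the SELECT itself is plain SQL and works the same way in MySQL:

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, user_id TEXT, message TEXT)")
      db.executemany(
          "INSERT INTO messages (user_id, message) VALUES (?, ?)",
          [("Adam", "Adam is here."), ("Peter", "Hi there this is Peter."),
           ("Peter", "I am getting sick."), ("Josh", "Oh, snap. I'm on a boat!"),
           ("Tom", "This show is great."), ("Laura", "Textmate rocks.")])

      # One row per user, ordered by that user's most recent message id.
      rows = db.execute("""
          SELECT user_id, MAX(id) AS last_id
          FROM messages
          GROUP BY user_id
          ORDER BY last_id DESC
          LIMIT 5
      """).fetchall()

      print([user for user, _ in rows])   # ['Laura', 'Tom', 'Josh', 'Peter', 'Adam']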

    Read the article

  • How to delete duplicate vectors within a multidimensional vector?

    - by David
    I have a vector of vectors:

      vector< vector<int> > BigVec;

    It contains an arbitrary number of vectors, each of an arbitrary size. I want to delete not duplicate elements of each vector, but any vectors that are the exact same as another. I don't need to preserve the order of the vectors so I can sort etc. It should be a really simple problem to solve but I'm new to this; my (not-working) best effort:

      for (int i = 0; i < BigVec.size(); i++) {
          for (int j = 1; j < BigVec.size(); j++) {
              if (BigVec[i][0] == BigVec[j][i]);
              {
                  BigVec.erase(BigVec.begin() + j);
                  i = 0;   // because i get the impression deleting a
                  j = 1;   // vector messes up a simple iteration through
              }
          }
      }

    I think there might be a solution using Unique(), but I can't get that to work either.

    Read the article

  • How can I remove an "ALMOST" Duplicate using LINQ? ( OR SQL? )

    - by Atomiton
    This should be an easy one for the LINQ gurus out there. I'm doing a complex query using UNIONs and CONTAINSTABLE in my database to return ranked results to my application. I'm getting duplicates in my returned data. This is expected. I'm using CONTAINSTABLE and CONTAINS to get all the results I need. CONTAINSTABLE is ranked by SQL and CONTAINS (which is run only on the Keywords field) is hard-code-ranked by me. (Sorry if that doesn't make sense.) Anyway, because the tuples aren't identical (their rank is different) a duplicate is returned. I figure the best way to deal with this is to use LINQ. I know I'll be using the Distinct() extension method, but do I have to implement the IEqualityComparer interface? I'm a little fuzzy on how to do this. For argument's sake, say my result set is structured like this class:

      class Content
      {
          ContentID int   // KEY
          Rank int
          Description String
      }

    If I have a List<Content>, how would I write the Distinct() method to exclude Rank? Ideally I'd like to keep the Content's highest Rank. So, if one Content's Rank is 112 and the other is 76, I'd like to keep the 112 rank. Hopefully I've given enough information.

    Read the article

  • Django unit testing: South-migrated DB works in MySQL, throws duplicate PK error in PostGreSQL. Am I

    - by unclaimedbaggage
    Hi folks, (Worth starting off with a disclaimer: I'm very new to PostgreSQL.) I have a django site which involves a standard app/tests.py testing file. If I migrate the DB to MySQL (through South), the tests all pass. However in PostgreSQL, I'm getting the following error:

      IntegrityError: duplicate key value violates unique constraint "business_contact_pkey"

    Note this happens while unit testing only - the actual page runs fine in both MySQL & PostgreSQL. Really having a heckuva time figuring this one out. Anyone have ideas? Below are the PostgreSQL "\d business_contact" output & the offending tests.py method if they help. No changes were made to either DB except the (same) South migrations. Thanks.

          Column    |           Type           | Modifiers
      --------------+--------------------------+----------------------------------------------------------------
      first_name    | character varying(200)   | not null
      mobile_phone  | character varying(100)   |
      surname       | character varying(200)   | not null
      business_id   | integer                  | not null
      created       | timestamp with time zone | not null
      deleted       | boolean                  | not null default false
      updated       | timestamp with time zone | not null
      slug          | character varying(150)   | not null
      phone         | character varying(100)   |
      email         | character varying(75)    |
      id            | integer                  | not null default nextval('business_contact_id_seq'::regclass)
      Indexes:
          "business_contact_pkey" PRIMARY KEY, btree (id)
          "business_contact_slug_key" UNIQUE, btree (slug)
          "business_contact_business_id" btree (business_id)
      Foreign-key constraints:
          "business_id_refs_id_772cc1b7b40f4b36" FOREIGN KEY (business_id) REFERENCES business(id) DEFERRABLE INITIALLY DEFERRED
      Referenced by:
          TABLE "business" CONSTRAINT "primary_contact_id_refs_id_dfaf59c4041c850" FOREIGN KEY (primary_contact_id) REFERENCES business_contact(id) DEFERRABLE INITIALLY DEFERRED

    TEST DEF:

      def test_add_business_contact(self):
          """ Add a business contact """
          contact_slug = 'test-new-contact-added-new-adf'
          business_id = 1
          business = Business.objects.get(id=business_id)
          postdata = {
              'first_name': 'Test',
              'surname': 'User',
              'business': '1',
              'slug': contact_slug,
              'email': '[email protected]',
              'phone': '12345678',
              'mobile_phone': '9823452',
              'business': 1,
              'business_id': 1,
          }
          #Test to ensure contacts that should not exist are not returned
          contact_not_exists = Contact.objects.filter(slug=contact_slug)
          self.assertFalse(contact_not_exists)
          #Add the contact and ensure it is present in the DB afterwards """
          contact_add_url = '%s%s/contact/add/' % (settings.BUSINESS_URL, business.slug)
          self.client.post(contact_add_url, postdata)
          added_contact = Contact.objects.filter(slug=contact_slug)
          print added_contact
          try:
              self.assertTrue(added_contact)
          except:
              formset = ContactForm(postdata)
              print formset.errors
              self.assertFalse(True, "Contact not found in the database - most likely, the post values in the test didn't validate against the form")
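
    Not from the thread, but a frequent cause of exactly this symptom is that rows were loaded with explicit primary keys (fixtures or data migrations), which MySQL's auto_increment tolerates but which leaves the PostgreSQL sequence behind the existing ids, so the test's INSERT reuses a key. A hedged sketch, in current Django terms, of resyncing the sequence - the table and column names are taken from the \d output above:

      from django.db import connection

      # Bump the sequence behind business_contact.id up to MAX(id), so the next
      # INSERT gets a fresh key instead of colliding with an existing row.
      with connection.cursor() as cursor:
          cursor.execute("""
              SELECT setval(
                  pg_get_serial_sequence('business_contact', 'id'),
                  COALESCE((SELECT MAX(id) FROM business_contact), 1)
              )
          """)

    Django can also generate the equivalent statements for every table of an app with "manage.py sqlsequencereset <app>", which can be piped into "manage.py dbshell".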

    Read the article
