Search Results

Search found 14874 results on 595 pages for 'mysql connector'.

  • How do I deploy a charm from a local repository?

    - by Matt McClean
    I am trying to run the charm tutorial from the juju documentation by creating a new charm from a local repository. I started by installing the charms from bzr on my local Ubuntu 12.04 desktop running in a virtual machine. The new file structure is the following:

      ubuntu@ubuntu-VirtualBox:~$ find charms/precise/drupal/
      charms/precise/drupal/
      charms/precise/drupal/hooks
      charms/precise/drupal/hooks/db-relation-changed
      charms/precise/drupal/hooks/install
      charms/precise/drupal/hooks/start
      charms/precise/drupal/hooks/stop
      charms/precise/drupal/metadata.yml
      charms/precise/drupal/README

    When I install the mysql charm, which was downloaded from the remote charm repository, it works fine. However, when I run the following command to deploy the new charm, it fails with this error message:

      ubuntu@ubuntu-VirtualBox:~$ juju deploy --repository=charms local:precise/drupal
      2012-05-09 10:01:05,671 INFO Searching for charm local:precise/drupal in local charm repository: /home/ubuntu/charms
      2012-05-09 10:01:05,845 WARNING Charm '.mrconfig' has an error: CharmError() Error processing '/home/ubuntu/charms/precise/.mrconfig': unable to process /home/ubuntu/charms/precise/.mrconfig into a charm
      Charm 'local:precise/drupal' not found in repository /home/ubuntu/charms
      2012-05-09 10:01:06,217 ERROR Charm 'local:precise/drupal' not found in repository /home/ubuntu/charms

    Is there some file missing in the drupal charm directory that juju needs to make the charm valid? I also get the .mrconfig processing error when deploying the mysql charm, so is there something I need to change there as well?
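
    For comparison, a minimal local charm might look like the sketch below (the charm name, summary and relation fields are illustrative; note that the charm documentation names the metadata file metadata.yaml, with a .yaml extension):

      # ~/charms/precise/drupal/metadata.yaml -- hypothetical minimal content:
      #   name: drupal
      #   summary: Drupal CMS
      #   description: Deploys Drupal against a MySQL backend.
      #   requires:
      #     db:
      #       interface: mysql
      mkdir -p ~/charms/precise/drupal/hooks
      juju deploy --repository="$HOME/charms" local:precise/drupal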

  • I can't uninstall Ubuntu software

    - by cunix
      root@cunix:/home/cunix# sudo apt-get remove fern-wifi-cracker
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following packages were automatically installed and are no longer required:
        libqt4-test libqt4-sql-mysql mysql-common libqt4-xmlpatterns libqt4-help
        python-qt4 python-sip libqt4-sql-sqlite libqt4-sql macchanger
        libqt4-designer libmysqlclient16 python-scapy libqt4-scripttools
      Use 'apt-get autoremove' to remove them.
      The following packages will be REMOVED:
        fern-wifi-cracker
      0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
      After this operation, 3,514kB disk space will be freed.
      Do you want to continue [Y/n]? y
      (Reading database ... 167661 files and directories currently installed.)
      Removing fern-wifi-cracker ...
      dpkg (subprocess): unable to execute installed pre-removal script (/var/lib/dpkg/info/fern-wifi-cracker.prerm): Exec format error
      dpkg: error processing fern-wifi-cracker (--remove):
       subprocess installed pre-removal script returned error exit status 2
      Errors were encountered while processing:
       fern-wifi-cracker
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    How do I uninstall it?
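
    The failure is in the package's pre-removal script, so one common workaround is to neutralize that script and retry (a sketch of the standard dpkg recovery steps, not specific to this package):

      # Overwrite the broken prerm with a no-op script, then remove again.
      printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/fern-wifi-cracker.prerm
      sudo chmod 755 /var/lib/dpkg/info/fern-wifi-cracker.prerm
      sudo apt-get remove fern-wifi-cracker
      sudo apt-get autoremove   # clears the no-longer-required libraries listed above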

  • Please help me understand libraries/includes

    - by fiftyeight
    I'm trying to understand how libraries work. For example, I downloaded a tarball and extracted it. Now when I run "./configure", it searches (from what I understand) in pre-defined directories for certain library files. What does it do then? It creates a makefile, and the makefile contains the paths to these libraries? Then when I run "make", it compiles the source code and hard-codes the locations of the libraries? Am I correct? I do not really understand whether libraries are files with pre-defined paths or whether the OS somehow gives access to the libraries through system calls. Another example: I compiled something on my computer, then moved it to a remote server. The executable needs the MySQL libraries to work; the server has MySQL, but for some reason when I execute the file it tells me "can't find libmysqlclient.so.16". Is there a solution for this? Is there a way to know where it tries to locate this file, or to give it another path? I can't compile it on the server since it has no compiler, and I don't have root access to install packages. Last question: in the sequence "./configure", "make", "make install", is the "make install" command the only one that actually puts files outside the directory in which these files reside? If, for example, the software will be installed in /usr/local/, is the "make install" command the only one that will require "sudo" before it? Let me see if I got it correctly: "./configure" creates the Makefile according to the location of various files on your system, "make" compiles the source code according to this Makefile, and "make install" moves the files to their appropriate locations. I know this has been very long; I thank anyone who had the patience to read my question :)
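
    On the missing libmysqlclient.so.16 point, two standard tools answer exactly those questions (a sketch; the binary name and paths are illustrative):

      # List the shared libraries the binary needs and where the dynamic
      # loader resolves each one ("not found" marks the missing ones):
      ldd ./myprogram
      # Without root access, ship a private copy of the library and point
      # the loader at it at run time:
      mkdir -p ~/lib
      cp /some/path/libmysqlclient.so.16 ~/lib/
      LD_LIBRARY_PATH="$HOME/lib" ./myprogram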

  • Need help making a decision on a career switch? [closed]

    - by Fero
    I am a software engineer with 4 years of experience in web development using PHP, Drupal, MySQL, Ajax and client-side technologies like JavaScript, jQuery, HTML and more. I have narrowed my career switch down to two platforms: SAP ABAP (because ABAP is related to coding) and Salesforce. My one and only reason is that I am not getting good pay for the technologies I currently work with; even top-level companies are not ready to pay well for them (and I am not expecting more). To be honest, I am good at technical and HR interviews too. So I made an analysis of highly paid platforms and arrived at these two: SAP and Salesforce (the probability of an on-site opportunity is also very high for both). Here are my questions: I am totally new to the technologies mentioned above, so which will suit me best? I have basic ideas of both platforms but am confused about which to choose. I have good coding experience in PHP and Drupal as well as good experience with MySQL, and I have built sites for e-commerce, LMS, Q&A, travel, blogs, social networking and more. Which can I learn more easily, or which has good documentation online? Please understand that I am not trying to start a debate here; I hope the professionals here can show me the correct path. I am waiting to travel down it...

  • Which language is more suitable for heavy file tasks?

    - by All
    I need to write a script (based on basic functions) to process image/audio/video files. The process consists mainly of filesystem tasks and conversions. The database of files is stored in MySQL. The script is simple but puts a heavy load on the system, for example renaming/converting/copying thousands of files in a run. The script does not read the content of the files into memory; it just manages the commands for sub-processes. The main weight is the communication with the filesystem. The script will be used regularly for new files. My concern is performance. I am thinking of either a shell script or a compiled language like C. Please advise which programming language is more suitable for this purpose and why. UPDATE: An example is to scan a folder for images, convert them with ImageMagick, move the files to a destination folder, get file info, then update the database (a sketch follows below). As you can see, the process has little room for optimization, and most languages have similar APIs for popular programs like ImageMagick, MySQL, etc., so it could be written in any language. I just wish to reduce resource usage by speeding up the long loop. NOTE: I know that questions comparing languages are frowned upon, but I really had trouble choosing, because the problems only appear in action.
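
    For concreteness, the loop in question might look like this as a shell script (directory names, table and credentials are hypothetical):

      #!/bin/sh
      # Convert each new image, archive the original, record it in MySQL.
      for f in /incoming/*.jpg; do
        base=$(basename "$f" .jpg)
        convert "$f" -resize 1024x768 "/processed/$base.png"   # ImageMagick
        info=$(identify "/processed/$base.png")                # get file info
        mv "$f" /archive/
        mysql mydb -e "UPDATE files SET status='done' WHERE name='$base';"
      done

    Note that each iteration forks several sub-processes (convert, identify, mv, mysql); since those dominate the cost in any language, the interpreter overhead of the loop itself is usually the smaller factor.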

  • Linux distro for software development support?

    - by Xie Jilei
    I've spent too much time setting up and maintaining a development server which contains the following tools:

      - Common services like SSH, BIND, rsync, etc.
      - Subversion, Git.
      - An Apache server, which runs CGit, Trac, Webmin, phpMyAdmin, phpPgAdmin, etc.
      - Jetty, which runs Archiva and Hudson.
      - Bugzilla.
      - PostgreSQL and MySQL servers.

    I've created a lot of Debian packages, like my-trac-utils, my-bugzilla-utils, my-bind9-utils, my-mysql-utils, etc., to make my life more convenient. However, I still feel I need a lot more utils, and I've spent a lot of time maintaining these packages, too. I think there may be many developers doing the same things, as tools like Subversion, Git and Trac are so common today. It's not too hard to install and configure each of them, but it takes a long time to install them all, and it's time-consuming to maintain them: backing up the data, plotting the usage graphs, and generating web reports (gitstat, for example). So I'd like to hear whether there exists any pre-configured distro for the development-server purpose, i.e. something like BackTrack for hackers?

  • Amazon AMIs and Oracle VM templates

    - by llaszews
    I have worked with Oracle VM templates and, most recently, with Amazon Machine Images (AMIs). The similarities in the functionality and capabilities they provide are striking. Just take a look at the definitions:

    "An Amazon Machine Image (AMI) is a special type of pre-configured operating system and virtual application software which is used to create a virtual machine within the Amazon Elastic Compute Cloud (EC2). It serves as the basic unit of deployment for services delivered using EC2." (AWS AMIs)

    "Oracle VM Templates provide an innovative approach to deploying a fully configured software stack by offering pre-installed and pre-configured software images. Use of Oracle VM Templates eliminates the installation and configuration costs, and reduces the ongoing maintenance costs, helping organizations achieve faster time to market and lower cost of operations." (Oracle VM Templates)

    Other things they have in common:

      1. Both have 35 Oracle images or templates: AWS AMI pre-built images; Oracle pre-built VM Templates.
      2. Both allow you to build your own images or templates:
         A. OVM template builder: Oracle VM Template Builder, an open-source graphical utility that makes it easy to use Oracle Enterprise Linux "Just enough OS" (JeOS)-based scripts for developing pre-packaged virtual machines for Oracle VM.
         B. AMI 'builder': AMI builder.

    However, AWS has the added feature/benefit of letting you add your own AMI to the AWS AMI catalog (AMI - Adding to the AWS AMI catalog). Another plus for AWS and AMIs is that there are hundreds of MySQL AMIs (AWS MySQL AMIs). A benefit of Oracle VM templates is that they can run in any public or private cloud environment, not just AWS EC2; however, Oracle VM templates first need to be imaged as AMIs before they can run in the AWS cloud.

  • Flat files vs. RDBMS: few reads/writes, few changes

    - by Bob Lapique
    I have to handle data from long-term (years, decades) climate monitoring stations. The data flow usually starts with raw data (voltages, etc.) plus quality-check information (pressure, temperature, flow rate, etc.), generally recorded at 1 Hz. Then the data are assigned a quality flag (by a human and/or a program), processed (calibration curves applied) and flagged. So we basically end up with two datasets: raw and processed data. New data are typically added once a day (~500 KB/day/instrument). Simultaneous queries are unlikely to ever happen. I wanted to go for an RDBMS (we have a MySQL server) and have some experience in database design, but the IT guy keeps telling me that flat files will do the job just as well. I suspect him of trying to make his life easier when it comes to backing up/upgrading the MySQL server. There are not many links between the data, and they don't change much, but the quality flags will change. An RDBMS makes it easier to compare data from different instruments on a many-days scale, compared to daily text files. Well, what would you advise? Thanks.
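
    To make the RDBMS option concrete, here is a minimal sketch of the two-dataset layout described above (all table and column names are hypothetical):

      -- schema.sql -- load with: mysql climate < schema.sql
      CREATE TABLE raw_sample (
        instrument_id INT NOT NULL,
        ts DATETIME NOT NULL,
        value DOUBLE,
        PRIMARY KEY (instrument_id, ts)
      );
      CREATE TABLE processed_sample (
        instrument_id INT NOT NULL,
        ts DATETIME NOT NULL,
        value DOUBLE,
        quality_flag TINYINT,  -- the one column expected to change over time
        PRIMARY KEY (instrument_id, ts)
      );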

  • Git repo: Unravelling my mess into tidy branches

    - by Martin
    I wanted to play with a project, so I git cloned it and, following its instructions, created a local branch for my configuration (I guess so that users can merge updates back). At first I was just tweaking to suit my preferences, so I didn't bother with any further branching, but now I have some code that might be useful to someone else, but with my passwords, etc. in the same branch. Effectively, I have one big branch from which I'd like to have:

      - A Postgres backend (default), but with some new code I've added.
      - A MySQL backend (the biggest change I've made), with that same new code.
      - My settings: I can't git-ignore the settings file because I occasionally have to add sections for new functionality, but I need to keep my personal settings out of the public branches! I guess this would work best as a local-only branch.
      - Dev branches, which I would branch from the MySQL one.

    Starting from scratch, I think I could figure out how to branch/merge the various updates, but is there an easy way to walk through the existing repo and choose which commits to apply to which branch? Or possibly create a branch from a point upstream and then merge back, excluding certain commits?
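
    One common way to sort existing commits onto tidy branches is cherry-picking (a sketch; the branch names are taken from the list above and the placeholder SHAs are illustrative):

      git log --oneline                      # note the SHAs of the commits to sort
      git checkout -b postgres origin/master
      git cherry-pick <shared-sha>           # the new code both backends want
      git checkout -b mysql postgres
      git cherry-pick <mysql-sha>            # the MySQL backend work
      git checkout -b settings mysql         # local-only: never push this one
      git cherry-pick <passwords-sha>

    The other common tool is git rebase -i on a copy of the big branch: it walks every commit and lets you drop, reorder, or edit them, including splitting a commit that mixes settings with code.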

  • How do I start working as a programmer - what do I need?

    - by giorgo
    Hello, I am currently learning Java and PHP, as I have some projects from university which require me to apply both languages: specifically, a Java GUI application connecting to a MySQL database, and a web application that will be implemented in PHP/MySQL. I have started learning the MVC pattern, Struts and Spring, and I am also learning PHP with Zend. My first question is: how can I find employment as a programmer/software engineer? I ask because I have sent my CV to many companies, but all of them stated that I needed work experience. I really need some guidance on how to improve my career opportunities. At present I work on my own and haven't worked in collaboration with anyone on a particular project. I'm assuming most people create projects and submit them along with their CVs. My second question is: everyone has to make a start somewhere, but what if this somewhere doesn't come? What do I need to do to create the circumstances where I can easily progress? Thanks

  • Is an O'Reilly certification in PHP worth it?

    - by editzombie
    I asked this question over on Stack Overflow, but I didn't realise it wasn't really the place for not-so-technical questions. I've seen quite a few related threads on this forum, so I thought I'd try to get some feedback here. This is my first time asking a question on this forum, though I've read it a lot; I apologise if this repeats a thread. I'm interested in getting into web development. I am a video editor by trade, but living in Spain, the way things are at the moment, it's very difficult to find work. I have some very basic knowledge of HTML and CSS and a little bit of Flash, and have designed a few little personal websites myself. I also worked for an online marketing production company, where I worked a little on blog design in Blogger, amongst other social media. So that's my background, but I'm trying to expand my skills and get into web development as a career, or in general as part of my skill base; I was thinking particularly of PHP/MySQL. I have worked through a little of the Lynda.com tutorials and have invested in a book (Sams Teach Yourself PHP, MySQL and Apache). I'm still finding it very difficult to progress. I know I should really try some practice projects (any recommendations would be welcome). But I was also thinking about doing one of the O'Reilly certification courses and was wondering whether it would be worthwhile for a noob like me. I hear that the courses are associated with an American university, which I guess gives them more clout. Any other thoughts you have about how to make progress in learning web development would be fantastic. Thanks in advance.

  • MongoDB: Replicate data in documents vs. “join”

    - by JavierCane
    Disclaimer: This is a question derived from this one. What do you think of the following use case? I have a collection containing orders. These orders have a lot of related information needed by my current queries (think of the products; the buyer information; the region, country and state of the sale point; and so on). Thinking with a de-normalized approach, I shouldn't put mere identifiers for these related items in my main orders collection; instead, I have to repeat all the information for each order (i.e. I will repeat the buyer's name, surname, etc. for each of their orders). Assuming that premise, I'm committing to maintaining all the data related to an order without many updates (because if I modify the buyer's name, I'll have to iterate through all orders, updating the ones made by the same buyer, and as MongoDB blocks at the document level on updates, I would be blocking the entire order at the moment of the update). My questions:

      - Will I have to replicate all the products' related data (i.e. category, maker, and optional attributes like color, size, ...)?
      - What if a new feature is requested and I have to write a lot of queries with the products "as the entry point of the query" (i.e. reports showing the products' sales performance grouped by region, country, or whatever)? Is it fair enough to apply the $unwind operation to my original orders collection (and what about the performance)? Or should I build another collection with these queries in mind and replicate all the products' information (and their orders) again?
      - Wouldn't it be better to store a product_id in the original orders collection, to be more tolerant of requirement changes (and what about emulating JOINs)?
      - Would the optimal approach be a mixed solution with an RDBMS like MySQL to retrieve the complete data? I mean: store product, user and location identifiers in the orders collection and have queries in MySQL like getAllUsersDataByIds, in which I would perform a SELECT * FROM users WHERE user_id IN (:identifiers_retrieved_from_the_mongodb_query).
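
    For the report-style query in the second point, an $unwind-based aggregation run from the shell might look like this (the database, collection and field names are hypothetical):

      mongo shop --eval '
        db.orders.aggregate([
          { $unwind: "$products" },
          { $group: { _id: { product: "$products.product_id",
                             country: "$buyer.country" },
                      units: { $sum: "$products.qty" } } }
        ])'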

  • Use a SQL Database for a Desktop Game

    - by sharethis
    Developing a game engine: I am planning a computer game and its engine. There will be a three-dimensional world with a first-person view, and it will be single-player for now. The programming language is C++ and it uses OpenGL.

    Data-centered design decision: My design decision is to use a data-centered architecture with a global event manager and a global data manager. There are many components: physics, input, sound, renderer, AI, ... Each component can trigger and listen to events; moreover, each component can read, edit, create and remove data. The question is about the data manager.

    Whether to use a relational database: Should I use a SQL database, e.g. SQLite or MySQL, to store the game data? This covers virtually all game content: items, characters, inventories, ... (except meshes and textures, which are even more performance-critical, so I will keep them in memory). Is a SQL database fast enough to use for real-time reading and writing of game information, like the position of a moving character? I also need to care about cross-platform compatibility. Aside from keeping everything in memory, what alternatives do I have?

    The advantages would be: Using a relational database like MySQL would give a data-oriented structure which allows fast computation, and I would not need objects to represent entities. I could easily query the data of objects near the player needed for rendering, and I wouldn't have to take care of data for objects far away. Moreover, there would be no need for savegames, since the whole game state is saved in the database. Last but not least, expanding the game to an online game would be relatively easy, because there would already be a single place where the whole game state is stored.
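
    To get a rough feel for the relational option without committing to it, here is a toy sketch using the sqlite3 command-line shell (the schema and values are invented for illustration):

      -- world.sql -- load and query with: sqlite3 game.db < world.sql
      CREATE TABLE entity (
        id   INTEGER PRIMARY KEY,
        kind TEXT,
        x REAL, y REAL, z REAL
      );
      INSERT INTO entity (kind, x, y, z) VALUES ('npc', 4.0, 0.0, -2.5);
      -- "objects near the player": a cheap bounding-box query
      SELECT id, kind FROM entity
       WHERE x BETWEEN -10 AND 10 AND z BETWEEN -10 AND 10;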

  • How to access Ubuntu Server from local PC?

    - by Roland
    Today I installed my first web server, running Ubuntu 12.04 LTS. I got Apache, PHP and MySQL working; there is even phpMyAdmin! Everything works fine on that PC, but the problem is that I have no idea how to connect to this server from my own PC. Just to clarify: I have one PC that I work on, and another one running Ubuntu Server. I even managed to connect them through the router, which I configured to work as a switch. I can see the Ubuntu server from my Windows PC in "Network", but it's empty; I can't see any files. I tried to share the folder etc/www on the server, but it shows an error saying something about rights, that I'm not this folder's owner. I guess I'm not going about this the right way at all, am I? Even if I could see the shared folder on my Windows PC, I would still not be able to type "somedomain.com" on the Windows PC and access, for example, index.php or the MySQL database. So the question is: how do I configure Ubuntu Server to be accessible from the Windows PC?
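
    As a first check, serving pages over HTTP only requires knowing the server's LAN address (a sketch; the address shown is made up):

      # On the Ubuntu server: find its LAN IP and confirm Apache is listening.
      hostname -I                        # e.g. 192.168.1.50
      sudo netstat -tlnp | grep ':80'
      # From the Windows PC, browse to http://192.168.1.50/
      # To use a name instead of the IP, add a line such as
      #   192.168.1.50  myserver.local
      # to C:\Windows\System32\drivers\etc\hosts on the Windows PC.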

  • Work @ Java shop. Tasked to redo intranet. I only know PHP - CTO says use that; Ops says no - we have JSP, use that - Thoughts?

    - by Mackysback
    I work as a project manager and was given the opportunity to redo the intranet and Internet sites. I'd love to; it would be a great addition to my resume. However, I am not familiar with Java or JSP, while I am fairly proficient with PHP and MySQL. My intention was to use a CMS that I have experience with: WordPress, Drupal or Joomla. The site would be very simple, so I was thinking WordPress would be fine for this. I told management that I would need it to run on PHP and they gave me their blessing. Now, the disconnect between IT and management: I spoke with ops and they told me that we have Java and JSP for that purpose, although the IT guy did say "oh, but I do know that PHP and MySQL are gaining popularity". That said, I do not understand much as far as systems architecture goes. We self-host and run WebSphere with IIS and JBoss with Tomcat. Any advice or suggestions? Thanks! Is my request that unfeasible?

  • Mirroring of Apps across servers

    - by user1038814
    We wish to host multiple apps across multiple servers. What we are looking for (ideally) is an existing solution that will work. For a single app, we would normally follow a route like this for failover:

      1. The app is installed on one server along with its MySQL database.
      2. The app is also installed on a second server.
      3. rsync is used to mirror the files over to the second server and ensure consistency, and MySQL is set up with master-slave replication.
      4. We use a service such as DNS Made Easy, which has DNS failover: if one server goes down, it automatically routes traffic to the backup server.

    We have done the above a few times and generally it's fine (a sketch of the recipe follows below). The issue I have here is that the above is for one app. What I would like to look at is how we can manage this for multiple apps, and whether there is a layer (such as VMware) that has complete mirroring built in at the OS level. For example, how do web hosts currently do it when they ensure that more than one machine is running a bunch of hosted websites? If you were running hosting and you had 200 clients on a server, you would want the same clients across two or more servers with everything mirrored. Any advice would be much appreciated.
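
    The per-app recipe above, sketched as configuration (host names and paths are invented, and the MySQL snippet shows only the master side):

      # Cron entry on the primary: mirror the app files to the standby.
      */5 * * * * rsync -az --delete /var/www/app1/ standby.example.com:/var/www/app1/

      # /etc/mysql/my.cnf on the master -- replication snippet:
      #   [mysqld]
      #   server-id = 1
      #   log_bin   = /var/log/mysql/mysql-bin.log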

  • C++ Database vs Reading Files

    - by Ohmages
    I've been programming a C++ game/server for the past year. I have been using MySQL for character logins, items, monsters, etc. (I'm on Windows). My question is: what are some of the databases that big-time developers use, e.g. for Battle.net, Diablo II, Diablo III, Mythos, Hellgate, etc.? Do they have their own databases they built, or do they use an existing framework for logins and character transfers? I do know that in Diablo II, character files are used to transfer characters into the game world, but what about the login to Battle.net? Would it be wiser for me to stick with MySQL, is there something out there faster and more stable, or should I create a login system that looks through a file to see if you provided the correct password? Can't wait to get some replies. Thanks! PS. Currently the framework is much like Battle.net, where you log in to a lobby, then create and join games; the game server and lobby server are different servers, too. So I'm mainly wondering about the lobby server for logins, because I'm expecting several hundred thousand connections/logins.

  • Rounding off and displaying the values

    - by S.PRATHIBA
    Hi all, I have the following code:

      import java.io.*;
      import java.sql.*;
      import java.math.*;
      import java.lang.*;

      public class Testd1 {
          public static void main(String[] args) {
              System.out.println("Sum of the specific column!");
              Connection con = null;
              int m = 1;
              double sum, sum1, sum2;
              int e[];
              e = new int[100];
              int p;
              int decimalPlaces = 5;
              for (int i = 0; i < e.length; i++) {
                  e[i] = 0;
              }
              double b2, c2, d2, u2, v2;
              int i, j, k, x, y;
              double mat[][] = new double[10][10];
              try {
                  Class.forName("com.mysql.jdbc.Driver");
                  con = DriverManager.getConnection("jdbc:mysql://localhost:3306/prathi", "root", "mysql");
                  try {
                      Statement st = con.createStatement();
                      ResultSet res = st.executeQuery("SELECT Service_ID,SUM(consumer_feedback) FROM consumer1 group by Service_ID");
                      while (res.next()) {
                          int data = res.getInt(1);
                          System.out.println(data);
                          System.out.println("\n\n");
                          int c1 = res.getInt(2);
                          e[m] = res.getInt(2);
                          if (e[m] < 0)
                              e[m] = 0;
                          m++;
                          System.out.print(c1);
                          System.out.println("\t\t");
                      }
                      sum = e[1] + e[2] + e[3] + e[4] + e[5];
                      System.out.println("\n \n The sum is" + sum);
                      for (p = 21; p <= 25; p++) {
                          if (e[p] != 0)
                              e[p] = e[p] / (int) sum;   // I have typecast sum to get output
                          BigDecimal bd1 = new BigDecimal(e[p]);
                          bd1 = bd1.setScale(decimalPlaces, BigDecimal.ROUND_HALF_UP); // setScale is immutable
                          e[p] = bd1.intValue();
                          System.out.println("\n\n The normalized value is" + e[p]);
                          mat[4][p - 21] = e[p];
                      }
                  } catch (SQLException s) {
                      System.out.println("SQL statement is not executed!");
                  }
              } catch (Exception e1) {
                  e1.printStackTrace();
              }
          }
      }

    I have a table named consumer1. After calculating the sum I am getting the following values:

      mysql> select Service_ID, sum(consumer_feedback) from consumer1 group by Service_ID;
      +------------+------------------------+
      | Service_ID | sum(consumer_feedback) |
      +------------+------------------------+
      |         31 |                     17 |
      |         32 |                      0 |
      |         33 |                     60 |
      |         34 |                     38 |
      |         35 |                     38 |
      +------------+------------------------+

    In my program I am getting the sum for each Service_ID correctly, but after normalization, i.e. while calculating 17/153 = 0.111, I am getting 0 as the normalized value. I want the normalized values to be displayed correctly after rounding off. My output is as follows:

      C:\> javac Testd1.java
      C:\> java Testd1
      Sum of the specific column!
      31
      17
      32
      0
      33
      60
      34
      38
      35
      38
      The sum is153.0
      The normalized value is0
      The normalized value is0
      The normalized value is0
      The normalized value is0
      The normalized value is0

    After normalization I want to get 17/153 = 0.111; instead, the normalized value is 0. I want these values to be rounded off.

  • Handling file upload in a non-blocking manner

    - by Kaliyug Antagonist
    The background thread question is here. Just to make the objective clear: the user will upload a large file and must be redirected immediately to another page to proceed with different operations. But the file, being large, will take time to be read from the controller's InputStream, so I unwillingly decided to fork a new thread to handle this I/O. The code is as follows.

    The controller servlet:

      /**
       * @see HttpServlet#doPost(HttpServletRequest request, HttpServletResponse response)
       */
      protected void doPost(HttpServletRequest request, HttpServletResponse response)
              throws ServletException, IOException {
          System.out.println("In Controller.doPost(...)");
          TempModel tempModel = new TempModel();
          tempModel.uploadSegYFile(request, response);
          System.out.println("Forwarding to Accepted.jsp");
          /*
           * try { Thread.sleep(1000 * 60); } catch (InterruptedException e) {
           *     e.printStackTrace(); }
           */
          request.getRequestDispatcher("/jsp/Accepted.jsp").forward(request, response);
      }

    The model class:

      package com.model;

      import java.io.IOException;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      public class TempModel {

          public void uploadSegYFile(HttpServletRequest request, HttpServletResponse response) {
              System.out.println("In TempModel.uploadSegYFile(...)");
              /*
               * Trigger the upload/processing code in a thread, return
               * immediately and notify when the thread completes.
               */
              try {
                  FileUploaderRunnable fileUploadRunnable =
                          new FileUploaderRunnable(request.getInputStream());
                  /*
                   * Future<FileUploaderRunnable> future = ProcessUtils.submitTask(
                   *         fileUploadRunnable, fileUploadRunnable);
                   * FileUploaderRunnable processed = future.get();
                   * System.out.println("Is file uploaded : " + processed.isFileUploaded());
                   */
                  Thread uploadThread = new Thread(fileUploadRunnable);
                  uploadThread.start();
              } catch (IOException e) {
                  e.printStackTrace();
              }
              System.out.println("Returning from TempModel.uploadSegYFile(...)");
          }
      }

    The Runnable:

      package com.model;

      import java.io.File;
      import java.io.FileNotFoundException;
      import java.io.FileOutputStream;
      import java.io.IOException;
      import java.io.InputStream;
      import java.nio.ByteBuffer;
      import java.nio.channels.Channels;
      import java.nio.channels.ReadableByteChannel;

      public class FileUploaderRunnable implements Runnable {

          private boolean isFileUploaded = false;
          private InputStream inputStream = null;

          public FileUploaderRunnable(InputStream inputStream) {
              this.inputStream = inputStream;
          }

          public void run() {
              /* Read from the InputStream; if successful, set isFileUploaded = true. */
              System.out.println("Starting upload in a thread");
              File outputFile = new File("D:/06c01_output.seg"); /* will be changed later */
              FileOutputStream fos;
              ReadableByteChannel readable = Channels.newChannel(inputStream);
              ByteBuffer buffer = ByteBuffer.allocate(1000000);
              try {
                  fos = new FileOutputStream(outputFile);
                  while (readable.read(buffer) != -1) {
                      fos.write(buffer.array());
                      buffer.clear();
                  }
                  fos.flush();
                  fos.close();
                  readable.close();
              } catch (FileNotFoundException e) {
                  e.printStackTrace();
              } catch (IOException e) {
                  e.printStackTrace();
              }
              System.out.println("File upload thread completed");
          }

          public boolean isFileUploaded() {
              return isFileUploaded;
          }
      }

    My queries/doubts:

      1. Spawning threads manually from the servlet makes sense to me logically, but scares me coding-wise; after all, the container isn't aware of these threads (I think!).

      2. The current code is giving an exception which is quite obvious: the stream becomes inaccessible because the doPost(...) method returns before the run() method completes:

         In Controller.doPost(...)
         In TempModel.uploadSegYFile(...)
         Returning from TempModel.uploadSegYFile(...)
         Forwarding to Accepted.jsp
         Starting upload in a thread
         Exception in thread "Thread-4" java.lang.NullPointerException
             at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:512)
             at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:497)
             at org.apache.coyote.http11.InternalInputBuffer$InputStreamInputBuffer.doRead(InternalInputBuffer.java:559)
             at org.apache.coyote.http11.AbstractInputBuffer.doRead(AbstractInputBuffer.java:324)
             at org.apache.coyote.Request.doRead(Request.java:422)
             at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:287)
             at org.apache.tomcat.util.buf.ByteChunk.substract(ByteChunk.java:407)
             at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:310)
             at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:202)
             at java.nio.channels.Channels$ReadableByteChannelImpl.read(Unknown Source)
             at com.model.FileUploaderRunnable.run(FileUploaderRunnable.java:39)
             at java.lang.Thread.run(Unknown Source)

      3. Keeping point 1 in mind, does the use of the Executor framework help me in any way?

         package com.utils;

         import java.util.concurrent.Future;
         import java.util.concurrent.ScheduledThreadPoolExecutor;

         public final class ProcessUtils {

             /* Ensure that no more than 2 upload/processing requests run at once. */
             private static final ScheduledThreadPoolExecutor threadPoolExec =
                     new ScheduledThreadPoolExecutor(2);

             public static <T> Future<T> submitTask(Runnable task, T result) {
                 return threadPoolExec.submit(task, result);
             }
         }

    So how should I ensure that the user doesn't block and the stream remains accessible, so that the (uploaded) file can be read from it?

  • Using login details via an application

    - by ramin ss
    I have a cURL call (in C++) that sends my user and pass to a remauth.php file, so I think I am doing something wrong in remauth.php (I am a PHP beginner, and my program cannot run because the auth is not passed). I log in via the application.

    My cURL code:

      bool Auth_PerformSessionLogin(const char* username, const char* password)
      {
          curl_global_init(CURL_GLOBAL_ALL);
          CURL* curl = curl_easy_init();
          if (curl)
          {
              char url[255];
              _snprintf(url, sizeof(url), "http://%s/remauth.php", "SITEADDRESS.com");
              char buf[8192] = {0};
              char postBuf[8192];
              _snprintf(postBuf, sizeof(postBuf), "%s&&%s", username, password);
              curl_easy_setopt(curl, CURLOPT_URL, url);
              curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, AuthDataReceived);
              curl_easy_setopt(curl, CURLOPT_WRITEDATA, (void*)&buf);
              curl_easy_setopt(curl, CURLOPT_USERAGENT, "IW4M");
              curl_easy_setopt(curl, CURLOPT_FAILONERROR, true);
              curl_easy_setopt(curl, CURLOPT_POST, 1);
              curl_easy_setopt(curl, CURLOPT_POSTFIELDS, postBuf);
              curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, -1);
              curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, false);
              CURLcode code = curl_easy_perform(curl);
              curl_easy_cleanup(curl);
              curl_global_cleanup();
              if (code == CURLE_OK)
              {
                  return Auth_ParseResultBuffer(buf);
              }
              else
              {
                  Auth_Error(va("Could not reach the SITEADDRESS.com server. Error code from CURL: %x.", code));
              }
              return false;
          }
          curl_global_cleanup();
          return false;
      }

    And my remauth.php:

      <?php
      ob_start();
      $host="";       // Host name
      $dbusername=""; // MySQL username
      $dbpassword=""; // MySQL password
      $db_name="";    // Database name
      $tbl_name="";   // Table name

      // Connect to the server and select the database.
      mysql_connect("$host", "$dbusername", "$dbpassword") or die(mysql_error());
      mysql_select_db("$db_name") or die(mysql_error());

      // Define $username and $password
      //$username=$username;
      //$password=md5($_POST['password']);
      //$password=$password;
      $username=$_POST['username'];
      $password=$_POST['password'];
      //$post_item[]='action='.$_POST['submit'];

      // To protect against MySQL injection
      $username = stripslashes($username);
      $password = stripslashes($password);
      $username = mysql_real_escape_string($username);
      $password = mysql_real_escape_string($password);

      $sql="SELECT * FROM $tbl_name WHERE username='$username'";
      $result=mysql_query($sql);

      // mysql_num_rows counts the table rows
      $count=mysql_num_rows($result);

      // If the result matched $username and $password, there must be exactly 1 row
      if($count==1){
          $row = mysql_fetch_assoc($result);
          if (md5(md5($row['salt']).md5($password)) == $row['password']){
              session_register("username");
              session_register("password");
              echo "#";
              return true;
          } else {
              echo "o";
              return false;
          }
      } else {
          echo "o";
          return false;
      }
      ob_end_flush();
      ?>

  • JPA: optimize an EJB-QL query involving a large many-to-many join table

    - by Fabien
    Hi all. I'm using Hibernate Entity Manager 3.4.0.GA with Spring 2.5.6 and MySQL 5.1. I have a use case where an entity called Artifact has a reflexive many-to-many relation with itself, and the join table is quite large (1 million lines). As a result, the HQL query performed by one of the methods in my DAO takes a long time. Any advice on how to optimize this and still use HQL? Or do I have no choice but to switch to a native SQL query that would perform a join between the table ARTIFACT and the join table ARTIFACT_DEPENDENCIES?

    Here is the problematic query performed in the DAO:

      @SuppressWarnings("unchecked")
      public List<Artifact> findDependentArtifacts(Artifact artifact) {
          Query query = em.createQuery("select a from Artifact a where :artifact in elements(a.dependencies)");
          query.setParameter("artifact", artifact);
          List<Artifact> list = query.getResultList();
          return list;
      }

    And the code for the Artifact entity:

      package com.acme.dependencytool.persistence.model;

      import java.util.ArrayList;
      import java.util.List;
      import javax.persistence.CascadeType;
      import javax.persistence.Column;
      import javax.persistence.Entity;
      import javax.persistence.FetchType;
      import javax.persistence.GeneratedValue;
      import javax.persistence.Id;
      import javax.persistence.JoinColumn;
      import javax.persistence.JoinTable;
      import javax.persistence.ManyToMany;
      import javax.persistence.Table;
      import javax.persistence.UniqueConstraint;

      @Entity
      @Table(name = "ARTIFACT", uniqueConstraints = {
              @UniqueConstraint(columnNames = { "GROUP_ID", "ARTIFACT_ID", "VERSION" }) })
      public class Artifact {

          @Id
          @GeneratedValue
          @Column(name = "ID")
          private Long id = null;

          @Column(name = "GROUP_ID", length = 255, nullable = false)
          private String groupId;

          @Column(name = "ARTIFACT_ID", length = 255, nullable = false)
          private String artifactId;

          @Column(name = "VERSION", length = 255, nullable = false)
          private String version;

          @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
          @JoinTable(
              name = "ARTIFACT_DEPENDENCIES",
              joinColumns = @JoinColumn(name = "ARTIFACT_ID", referencedColumnName = "ID"),
              inverseJoinColumns = @JoinColumn(name = "DEPENDENCY_ID", referencedColumnName = "ID"))
          private List<Artifact> dependencies = new ArrayList<Artifact>();

          public Long getId() { return id; }
          public void setId(Long id) { this.id = id; }
          public String getGroupId() { return groupId; }
          public void setGroupId(String groupId) { this.groupId = groupId; }
          public String getArtifactId() { return artifactId; }
          public void setArtifactId(String artifactId) { this.artifactId = artifactId; }
          public String getVersion() { return version; }
          public void setVersion(String version) { this.version = version; }
          public List<Artifact> getDependencies() { return dependencies; }
          public void setDependencies(List<Artifact> dependencies) { this.dependencies = dependencies; }
      }

    Thanks in advance.

    EDIT 1: The DDLs are generated automatically by Hibernate EntityManager based on the JPA annotations in the Artifact entity. I have no explicit control over the automatically generated join table, and the JPA annotations don't let me explicitly set an index on a column of a table that does not correspond to an actual entity (in the JPA sense). So I guess the indexing of the table ARTIFACT_DEPENDENCIES is left to the DB, MySQL in my case, which apparently uses a composite index based on both columns but doesn't index the column that is most relevant to my query (DEPENDENCY_ID).

      mysql> describe ARTIFACT_DEPENDENCIES;
      +---------------+------------+------+-----+---------+-------+
      | Field         | Type       | Null | Key | Default | Extra |
      +---------------+------------+------+-----+---------+-------+
      | ARTIFACT_ID   | bigint(20) | NO   | MUL | NULL    |       |
      | DEPENDENCY_ID | bigint(20) | NO   | MUL | NULL    |       |
      +---------------+------------+------+-----+---------+-------+

    EDIT 2: When turning showSql on in the Hibernate session, I see many occurrences of the same type of SQL query, as below:

      select dependenci0_.ARTIFACT_ID as ARTIFACT1_1_, dependenci0_.DEPENDENCY_ID as DEPENDENCY2_1_,
             artifact1_.ID as ID1_0_, artifact1_.ARTIFACT_ID as ARTIFACT2_1_0_,
             artifact1_.GROUP_ID as GROUP3_1_0_, artifact1_.VERSION as VERSION1_0_
      from ARTIFACT_DEPENDENCIES dependenci0_
      left outer join ARTIFACT artifact1_ on dependenci0_.DEPENDENCY_ID=artifact1_.ID
      where dependenci0_.ARTIFACT_ID=?

    Here's what EXPLAIN in MySQL says about this type of query (the same query as above, with ARTIFACT_ID=1):

      +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+
      | id | select_type | table        | type   | possible_keys     | key               | key_len | ref                                         | rows | Extra |
      +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+
      |  1 | SIMPLE      | dependenci0_ | ref    | FKEA2DE763364D466 | FKEA2DE763364D466 | 8       | const                                       |  159 |       |
      |  1 | SIMPLE      | artifact1_   | eq_ref | PRIMARY           | PRIMARY           | 8       | dependencytooldb.dependenci0_.DEPENDENCY_ID |    1 |       |
      +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+

    EDIT 3: I tried setting the FetchType to LAZY in the JoinTable annotation, but I then get the following exception:

      Hibernate: select artifact0_.ID as ID1_, artifact0_.ARTIFACT_ID as ARTIFACT2_1_, artifact0_.GROUP_ID as GROUP3_1_, artifact0_.VERSION as VERSION1_ from ARTIFACT artifact0_ where artifact0_.GROUP_ID=? and artifact0_.ARTIFACT_ID=?
      51545 [btpool0-2] ERROR org.hibernate.LazyInitializationException - failed to lazily initialize a collection of role: com.acme.dependencytool.persistence.model.Artifact.dependencies, no session or session was closed
      org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.acme.dependencytool.persistence.model.Artifact.dependencies, no session or session was closed
          at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:380)
          at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:372)
          at org.hibernate.collection.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:119)
          at org.hibernate.collection.PersistentBag.size(PersistentBag.java:248)
          at com.acme.dependencytool.server.DependencyToolServiceImpl.createArtifactViewBean(DependencyToolServiceImpl.java:93)
          at com.acme.dependencytool.server.DependencyToolServiceImpl.createArtifactViewBean(DependencyToolServiceImpl.java:109)
          at com.acme.dependencytool.server.DependencyToolServiceImpl.search(DependencyToolServiceImpl.java:48)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:527)
          at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:166)
          at com.google.gwt.user.server.rpc.RemoteServiceServlet.doPost(RemoteServiceServlet.java:86)
          ... (followed by the standard HttpServlet and org.mortbay.jetty request-handling frames)

  • Adding UCM as a search source in Windows Explorer

    - by kyle.hatlestad
    A customer recently pointed out to me that Windows 7 supports federated search within Windows Explorer. This means you can perform searches against external sources such as Google, Flickr, YouTube, etc. right from within Explorer. While we do have the Desktop Integration Suite, which offers searching within Explorer, I thought it would be interesting to look into this method, which does not require any client software.

    Basically, federated searching hooks up in Windows Explorer through the OpenSearch protocol. A Search Connector Descriptor file is run and it installs the search provider. The file is an .osdx file, which is an OpenSearch Description document. It describes the search provider you are hooking up to, along with the URL for the query. If those results can come back as an RSS or ATOM feed, then you're all set.

    So the first step is to install the RSS Feeds component from the UCM Samples page on OTN. If you're on 11g, I've found that RSS Feeds works just fine on that version too. Next, you want to perform a Quick Search with a particular search term and then copy the RSS link address for that search result. Here is what an example URL might look like:

      http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&feedName=search_results&QueryText=%28+%3cqsch%3eoracle%3c%2fqsch%3e+%29&SortField=dInDate&SortOrder=Desc&ResultCount=20&SearchQueryFormat=Universal&SearchProviders=server&

    Now you want to create a new text file and start out with this information:

      <?xml version="1.0" encoding="UTF-8"?>
      <OpenSearchDescription xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/">
        <ShortName></ShortName>
        <Description></Description>
        <Url type="application/rss+xml" template=""/>
        <Url type="text/html" template=""/>
      </OpenSearchDescription>

    Enter a ShortName and Description. The ShortName will be the value used when displaying the search provider in Explorer. In the template attribute of the first Url element, enter the URL copied previously. You will then need to convert the ampersand symbols to '&amp;' to make them XML-compliant. Finally, you'll want to swap out the search term with '{searchTerms}'.

    For the second Url element, you can do the same thing, except you want to copy the UCM search results URL from the page of results. That URL will look something like:

      http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&SortField=dInDate&SortOrder=Desc&ResultCount=20&QueryText=%3Cqsch%3Eoracle%3C%2Fqsch%3E&listTemplateId=&ftx=1&SearchQueryFormat=Universal&TargetedQuickSearchSelection=&MiniSearchText=oracle

    Again, convert the ampersand symbols and replace the search term with '{searchTerms}'. When complete, save the file with the .osdx extension. The completed file should look like:

      <?xml version="1.0" encoding="UTF-8"?>
      <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/">
        <ShortName>Universal Content Management</ShortName>
        <Description>OpenSearch for UCM via Windows 7 Search Federation.</Description>
        <Url type="application/rss+xml" template="http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&amp;feedName=search_results&amp;QueryText=%28+%3Cqsch%3E{searchTerms}%3C%2fqsch%3E+%29&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=200&amp;SearchQueryFormat=Universal"/>
        <Url type="text/html" template="http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=20&amp;QueryText=%3Cqsch%3E{searchTerms}%3C%2Fqsch%3E&amp;listTemplateId=&amp;ftx=1&amp;SearchQueryFormat=Universal&amp;TargetedQuickSearchSelection=&amp;MiniSearchText={searchTerms}"/>
      </OpenSearchDescription>

    After you save the file, simply double-click it to create the provider. It will ask if you want to add the search connector to Windows. Click Add and it will add the connector to the Searches folder in your user folder, as well as to your Favorites. Now just click on the search icon and enter your term in the upper-right search box. As you type, it executes searches, and the results come back in Explorer. When you double-click on an item, it will try to download the web-viewable rendition for viewing. You also have the ability to save the search, just as you would in UCM, and there is a link to Search On Website which will launch your browser and go directly to the search results page there.

    And with some tweaks to the RSS component, you can make the results a bit more interesting. It supports the Media RSS standard, so you can pass along the thumbnails of the documents in the results. To enable this, edit the rss_resources.htm file in the RSS Feeds component. In the std_rss_feed_begin resource include, add the namespace 'xmlns:media="http://search.yahoo.com/mrss/"' to the rss definition:

      <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/"
           xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
           xmlns:media="http://search.yahoo.com/mrss/">

    Next, in the rss_channel_item_with_thumb include, below the closing image element, add this element:

      </images>
      <media:thumbnail url="<$if strIndexOf(thumbnailUrl, "@t") > 0 or strIndexOf(thumbnailUrl, "@g") > 0 or strIndexOf(thumbnailUrl, "@p") > 0$><$rssHttpHost$><$thumbnailUrl$><$elseif dGif$><$HttpWebRoot$>images/docgifs/<$dGif$><$endif$>" />
      <description>

    This and lots of other tweaks can be done to the RSS component to help extend it for optimum use in Explorer. Hopefully this can get you started.

    *Note: This post also applies to Universal Records Management (URM).
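
    One quick sanity check before wiring up the .osdx is to fetch the feed URL directly and confirm that valid RSS comes back (using the example host, port and search term from above):

      curl "http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&feedName=search_results&QueryText=%3Cqsch%3Eoracle%3C%2Fqsch%3E&SortField=dInDate&SortOrder=Desc&ResultCount=20&SearchQueryFormat=Universal"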

  • A big move is in the works for the Oracle data mining interface, Oracle Data Mining

    - by Fekete Zoltán
    As the teaser in last autumn's Oracle OpenWorld news and presentations already showed, Oracle is preparing a big move on the data mining front (Oracle Data Mining): an extension of the graphical interface of its highly usable data mining engine. If you look closely at that last link, the existing graphical interface is now listed as Oracle Data Miner Classic. So how can Oracle Data Mining be used?

      - Oracle Data Miner (a GUI, downloadable free of charge from OTN)
      - From Java and PL/SQL: the Oracle Data Mining JDeveloper and SQL Developer Extensions
      - From an Excel front end: the Oracle Spreadsheet Add-In for Predictive Analytics
      - ODM Connector for mySAP BW

    Oracle Data Mining technical information.

  • Big Data – Operational Databases Supporting Big Data – RDBMS and NoSQL – Day 12 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the cloud in the Big Data story. In this article we will understand the role of operational databases in supporting the Big Data story. Even though we keep talking about Big Data architecture, it is crucial to understand that a Big Data system can't exist in isolation. Many needs of the business can only be fulfilled with the help of operational databases; just having a system that can analyze big data will not solve every single data problem.

    Real World Example

    Think about it this way: you are using Facebook and you have just updated the information about your current relationship status. Within the next few seconds the same information is reflected in the timeline of your partner as well as a few of your immediate friends. After a while you will notice that the same information is also available to your remote friends. Later on, when someone searches for all the relationship changes among their friends, your change will also show up in the same list. Now here is the question: do you think Big Data architecture is doing every single one of these changes? Do you think that the immediate reflection of your relationship change to your family members is also due to the technology used in Big Data? Actually, the answer is that Facebook uses MySQL for the various updates in the timeline as well as the various events we perform on their homepage. It is really difficult to part with operational databases in any real-world business.

    Now we will look at a few examples of operational databases:

      - Relational Databases (this blog post)
      - NoSQL Databases (this blog post)
      - Key-Value Pair Databases (tomorrow's post)
      - Document Databases (tomorrow's post)
      - Columnar Databases (the day after's post)
      - Graph Databases (the day after's post)
      - Spatial Databases (the day after's post)

    Relational Databases

    We discussed the RDBMS role in the Big Data story in detail earlier, so we will not cover it extensively here. Relational databases are pretty much everywhere in most businesses that have been around for many years. The importance and existence of relational databases will remain as long as there is meaningful structured data around. There are many different kinds of relational databases, for example Oracle, SQL Server, MySQL and many others. If you are looking for an open-source and widely accepted database, I suggest trying MySQL, as it has been very popular in the last few years. I also suggest you try PostgreSQL. Besides many other essential qualities, PostgreSQL has a very interesting licensing policy: its license allows modification and distribution of the application in open or closed (source) form. One can make any modifications and keep them private, or contribute them to the community. I believe this one quality makes it much more interesting to use, and it will play a very important role in future.

    Nonrelational Databases (NoSQL)

    We have also covered nonrelational databases in earlier blog posts. NoSQL actually stands for Not Only SQL. There are plenty of NoSQL databases on the market, and selecting the right one is always very challenging. Here are a few properties that are essential to consider when selecting the right NoSQL database for operational purposes:

      - Data and query model
      - Persistence of data and design
      - Eventual consistency
      - Scalability

    Though all of the above properties are desirable in any NoSQL database, the one that most attracts me is eventual consistency.

    Eventual Consistency

    An RDBMS uses ACID (Atomicity, Consistency, Isolation, Durability) as the key mechanism for ensuring data consistency, whereas a nonrelational DBMS uses BASE for the same purpose. BASE stands for Basically Available, Soft state, and Eventual consistency. Eventual consistency is widely deployed in distributed systems. It is a consistency model used in distributed computing which expects the unexpected, often. In a large distributed system there are always various nodes joining and various nodes being removed, as such systems often use commodity servers; this happens either intentionally or accidentally. Even when one or more nodes are down, the entire system is expected to keep functioning normally: applications should be able to perform updates as well as retrieve data successfully without any issue. Additionally, the system is expected to return the same updated data at any time from all functioning nodes, and irrespective of when a node joins the system, if it is marked to hold some data it should eventually contain the same updated data. As per Wikipedia: eventual consistency is a consistency model used in distributed computing that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. In other words: informally, if no additional updates are made to a given data item, all reads of that item will eventually return the same value.

    Tomorrow

    In tomorrow's blog post we will discuss various other operational databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • Next Quarterly Customer Update Webcast is July 24th (July 25th in Asia Pacific)

    - by R.Hunter
    Join Team Informatics, Kyle Hatlestad from the WebCenter Content “A-Team” and Oracle WebCenter Product Management for the next Oracle WebCenter Quarterly Customer Update Webcast scheduled for July 24th (July 25th in Asia Pacific). Get the latest product management updates and learn more about WebCenter Content and WebCenter Sites. Team Informatics will give an overview of the WebCenter Sites 11g Connector to WebCenter Content and Kyle Hatlestad will discuss best practices for WCC deployment and configuration. You can follow Kyle’s blog at: http://blogs.oracle.com/kyle/ Don't miss out, there will be two live sessions with Q&A. Further details and the registration links for the webcast can be found on our Oracle Technology Network Page.
