Search Results

Search found 58478 results on 2340 pages for 'sanitizing data'.


  • Deploying BAM Data Control Application to WLS server

    - by [email protected]
    Typically we test ADF pages that use the BAM Data Control on the integrated WLS server (ADRS). If we deploy the same application to a standalone WLS server, we have to make sure the BAM server connection is created in that WLS instance; unless we do that we may face runtime errors.

    In development mode of WLS (Reference): for development-mode WebLogic Server, you can set the mode to OVERWRITE to test user names and passwords. You can set the mode by running setDomainEnv.cmd or setDomainEnv.sh with the following option added to the command. Add the following to the JAVA_PROPERTIES entry in the <FMW_HOME>/user_projects/domains/<yourdomain>/bin/setDomainEnv.sh file:

        -Djps.app.credential.overwrite.allowed=true

    In production mode of WLS:

    1. Enable MDS: create and/or register your MDS repository. For more details refer to this.
    2. Edit adf-config.xml in your application and add the following tag:

        <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
          <mds-config version="11.1.1.000">
            <persistence-config>
              <metadata-store-usages>
                <metadata-store-usage default-cust-store="true" deploy-target="true" id="myRepos">
                </metadata-store-usage>
              </metadata-store-usages>
            </persistence-config>
          </mds-config>
        </adf-mds-config>

    3. Deploy the application to the WLS server, picking the appropriate repository from the MDS Repository dialog that pops up during deployment.

    Enterprise Manager (use these steps if using a version prior to the 11gR1 PS1 release of JDeveloper):

    1. Go to EM (http://<host>:<port>/EM).
    2. In the left pane, under Deployments, select Application1 (your application).
    3. In the right pane, in the top dropdown select "System MBean Browser -> oracle.adf.share.connections -> Server: AdminServer -> Application: <Appname> -> ADFConnections".
    4. In the right pane click "Operations -> CreateConnection".
    5. Enter Connection Type as "BAMConnection".
    6. Enter the connection name, the same as the one defined in JDev.
    7. Click "Invoke", then click "Return".
    8. Click "Operations -> Save".
    9. Now in ADFConnections in the navigator, select the connection just created and enter all the configuration details.
    10. Save and run the page.

    Enterprise Manager (use these steps, or the steps above, if using 11gR1 PS1 or newer):

    1. Go to EM (http://<host>:<port>/EM).
    2. In the left pane, under Deployments, select Application1 (your application).
    3. In the right pane, click on "Application Deployment" to invoke the dropdown, and in it select "ADF -> Configure ADF Connections".
    4. Select Connection Type "BAM" from the dropdown.
    5. Enter the connection name, the same as the one defined in JDev.
    6. Click on "Create Connection". This should add a new row below under "BAM Connections".
    7. Select the new connection and click on the "Edit" icon. This should bring up a dialog.
    8. Specify appropriate values for all connection parameters - Username, Password, BAM Server Host, BAM Server Port, Webtier Server Host, Webtier Server Port and BAM Webtier Protocol - and then click OK to dismiss the dialog.
    9. Click on "Apply".
    10. Run the page.
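    For reference, a minimal sketch of the setDomainEnv.sh edit described above; the exact shape of the JAVA_PROPERTIES line varies by domain, so treat this as illustrative rather than exact:

        # <FMW_HOME>/user_projects/domains/<yourdomain>/bin/setDomainEnv.sh
        # Append the credential-overwrite flag to the existing JAVA_PROPERTIES entry
        JAVA_PROPERTIES="${JAVA_PROPERTIES} -Djps.app.credential.overwrite.allowed=true"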

    Read the article

  • Folders in SQL Server Data Tools

    - by jamiet
    Recently I have begun a new project in which I am using SQL Server Data Tools (SSDT) and SQL Server Integration Services (SSIS) 2012. Although I have been using SSDT & SSIS fairly extensively while SQL Server 2012 was in the beta phase, I usually find that you don't learn about the capabilities and quirks of new products until you use them on a real project, hence I am hoping I'm going to have a lot of experiences to share on my blog over the coming few weeks. In this first such blog post I want to talk about file and folder organisation in SSDT.

    The predecessor to SSDT is Visual Studio Database Projects. When one created a new Visual Studio Database Project, a folder structure was provided with "Schema Objects" and "Scripts" in the root and a series of subfolders for each schema. Apparently a few customers were not too happy with the tool arbitrarily creating lots of folders in Solution Explorer, and hence SSDT has gone in completely the opposite direction; now no folders are created and new objects get created in the root – it is at your discretion where they get moved to.

    After using SSDT for a few weeks I can safely say that I preferred the older way, because I never used Solution Explorer to navigate my schema objects anyway, so it didn't bother me how many folders it created. Having said that, the thought of a single long list of files in Solution Explorer without any folders makes me shudder, so on this project I have been manually creating folders in which to organise files, and I have tried to mimic the old way as much as possible by creating two folders in the root: one for all schema objects and another for Pre/Post deployment scripts.

    This works fine until different developers start to build their own different subfolder structures; if you are OCD-inclined like me this is going to grate on you eventually, and hence you are going to want to move stuff around so that you have consistent folder structures for each schema and (if you have multiple databases) each project. Moreover, new files get created with a filename of the object name + ".sql", and often people like to have an extra identifier in the filename to indicate the object type.

    The overall point is this – files and folders in your solution are going to change. Some version control systems (VCSs) don't take kindly to files being moved around or renamed, because they recognise the renamed/moved file simply as a new file, and when they do that you lose the revision history which, to my mind, is one of the key benefits of using a VCS in the first place. On this project we have been using Team Foundation Server (TFS), and while it pains me to say it (as I am no great fan of TFS's version control system) it has proved invaluable when dealing with the SSDT problems that I outlined above, because it is integrated right into the Visual Studio IDE.

    Thus the advice from this blog post is: if you are using SSDT, consider using a Visual-Studio-integrated VCS that can easily handle file renames and file moves. I suspect that fans of other VCSs will counter by saying that their VCS weapon of choice can handle renames/file moves quite satisfactorily, and if that's the case... great... let me know about them in the comments. This blog post is not an attempt to make people use one particular VCS, only to make people aware of this issue that might arise when using SSDT.

    More to come in the coming few weeks!

    @jamiet

    Read the article

  • Big Data – Interacting with Hadoop – What is Sqoop? – What is Zookeeper? – Day 17 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of Pig and Pig Latin in the Big Data Story. In this article we will understand what Sqoop and Zookeeper are in the Big Data Story. There are two important components one should learn about when learning to interact with Hadoop – Sqoop and Zookeeper.

    What is Sqoop?

    Most businesses store their data in RDBMS as well as other data warehouse solutions. They need a way to move data to the Hadoop system for various processing, and to return it back to the RDBMS from the Hadoop system. The data movement can happen in real time or at various intervals in bulk. We need a tool which can help us move this data from SQL to Hadoop and from Hadoop to SQL. Sqoop (SQL to Hadoop) is such a tool: it extracts data from non-Hadoop data sources, transforms it into a format which Hadoop can use, and later loads it into HDFS. Essentially it is an ETL tool – it Extracts, Transforms and Loads from SQL to Hadoop. The best part is that it also extracts data from Hadoop and loads it back into relational (RDBMS) data stores. Essentially, Sqoop is a command line tool which does SQL to Hadoop and Hadoop to SQL. It is a command line interpreter that creates MapReduce jobs behind the scenes to import data from an external database to HDFS. It is a very effective and easy-to-learn tool, even for non-programmers.

    What is Zookeeper?

    ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. In other words, Zookeeper is a replicated synchronization service with eventual consistency. In simpler words – in a Hadoop cluster there are many different nodes and one node is the master. Let us assume that the master node fails for some reason. In this case, the role of the master node has to be transferred to a different node. The main role of the master node is managing the writers, as that task requires persistence in the order of writing. In this kind of scenario Zookeeper will assign a new master node and make sure that the Hadoop cluster performs without any glitch. Zookeeper is Hadoop's method of coordinating all the elements of these distributed systems. Here are a few of the tasks which Zookeeper is responsible for:

    Zookeeper manages the entire workflow of starting and stopping various nodes in the Hadoop cluster.

    When any process in the Hadoop cluster needs certain configuration to complete a task, Zookeeper makes sure that the node gets the necessary configuration consistently.

    In case the master node fails, Zookeeper can assign a new master node and make sure the cluster works as expected.

    There are many other tasks Zookeeper performs when it comes to Hadoop cluster coordination and communication. Basically, without the help of Zookeeper it is not practical to design a new fault-tolerant distributed application.

    Tomorrow

    In tomorrow's blog post we will discuss a very important component of the Big Data ecosystem – Big Data analytics.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
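    To give a flavor of the command line, a minimal Sqoop sketch in both directions – the connection string, credentials, tables and paths here are hypothetical:

        # Pull a table from an RDBMS into HDFS (values are illustrative)
        sqoop import \
          --connect jdbc:mysql://dbserver/sales \
          --username report -P \
          --table customers \
          --target-dir /user/hadoop/customers

        # And the reverse direction: export an HDFS directory back to a table
        sqoop export \
          --connect jdbc:mysql://dbserver/sales \
          --username report -P \
          --table customers_summary \
          --export-dir /user/hadoop/customers_summary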

    Read the article

  • SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)

    - by pinaldave
    Here is the email which made me write this blog post. When I write a blog post, I write keeping in mind that if the developer is not familiar with the concept he will attempt it on a development server. If for any reason you attempt it on any server other than your personal server, you should make sure you have complete confidence in your own expertise and understand the risk behind it. Well, let us read the email which I received. I have modified it a bit to remove information related to the organization and individual.

    "I just read your blog post on Beginning DQS. I went ahead and followed every single screenshot and it worked fine. I was able to execute the DQS project successfully. However, the same blog post got me in trouble – a serious trouble. After the first successful deployment I went ahead and created a few of my own knowledge bases and projects. I played around a bit and then decided to get back to real work. Now, we had deployed DQS on the production server only, so I experimented on the production server. When I got back to my work, I forgot to close all the windows. My manager found the window open and saw my test projects. He has asked me to delete my experiments immediately and said words which I cannot write to you. Here is the problem. I am not able to delete the project which I created earlier. I am able to open it and play with it, but the delete option is disabled and grayed out (see attached image). Now I believe there is nothing wrong with this project as it was just a test project. Would you please write to my manager that it is not harmful to leave that project there as it is? It is also not using any resources. I think he will believe you."

    As I said, this kind of email makes me uncomfortable. I do not want someone to execute anything on a production server. I often write notes and disclaimers on my posts when something is dangerous to execute on a production server. However, if someone who is not an expert with SQL Server attempts something new on a production server, I think the major issue lies with the person (admin) who gave the new developer permission to the production server. This has to be carefully avoided. Here was my response to the individual.

    "I cannot write anything to your manager as he has not asked me anything. Honestly, I believe he is correct in his behavior, as you should not have executed anything on the production server without prior approval and testing on a development server. Any R&D must be done on a local box or development box. I suggest you request your manager to prevent access for users who do not need it. If he is a good manager, he might have already implemented that by now after this recent event. I also saw your screenshot. Here is the issue: while you were playing with the project, you might have closed it halfway through, without completing it. For that reason it is locked. You can open it and continue from the place where you left the project. If you do not need the project any more, right-click on it and click on unlock the project. This will enable the DELETE option and now you can delete the project. Next time, be safe out there. It may be dangerous to have admin access to a production server when it is not needed."

    I have not yet heard from him, but I believe he will take my words positively.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc...), and I'm wondering what is the best way to implement syncing data. Criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you guys share some of your strategy/experience?

    For me, I'm thinking of something like this:

    1) Store the Last Modified Date on the iPhone.
    2) Upon launching, send a message like getNewData.php?lastModifiedDate=...
    3) The server will process it and send back only the data modified since last time.
    4) This data is formatted like so:

        <+><data id="..."></data></+> // add this to SQLite/CoreData
        <-><data id="..."></data></-> // remove this
        <%><data id="..."><attribute>newValue</attribute></data></%> // new modified value

    I don't want to make <+, <-, <%... for each attribute as well, because it would be too complicated, so probably when I receive a <% field, I would just remove the data with the specified id and then add it again (assuming id here is not some automatically auto-incremented field).

    5) Once everything is downloaded and updated, I will update the Last Modified Date field.

    The main problem with this strategy is: if the network goes down while I am updating something, the Last Modified Date is not yet updated, so next time I relaunch the app, I will have to go through the same thing again – not to mention potentially inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data changes), the user has to wait a long time until the new data is available. Should I use a Last-Modified-Date for each data field and update the data gradually?
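    For concreteness, here is a sketch of what a full delta payload in the question's notation might look like (ids, values and the envelope element are made up). Note that markers like <+> are the question's ad-hoc shorthand rather than well-formed XML – element names cannot begin with + or %, so a real payload would need names like <added>/<removed>/<modified>:

        <sync lastModifiedDate="2010-05-01T12:00:00Z">
          <+><data id="42"><attribute>new row</attribute></data></+>
          <-><data id="17"></data></->
          <%><data id="9"><attribute>newValue</attribute></data></%>
        </sync>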

    Read the article

  • Using list() to extract a data.table inside of a function

    - by Nathan VanHoudnos
    I must admit that the data.table J syntax confuses me. I am attempting to use list() to extract a subset of a data.table as a data.table object, as described in Section 1.4 of the data.table FAQ, but I can't get this behavior to work inside of a function. An example:

        require(data.table)

        ## Setup some test data
        set.seed(1)
        test.data <- data.table( X = rnorm(10), Y = rnorm(10), Z = rnorm(10) )
        setkey(test.data, X)

        ## Notice that I can subset the data table easily with literal names
        test.data[, list(X,Y)]
        ##              X           Y
        ##  1: -0.8356286 -0.62124058
        ##  2: -0.8204684 -0.04493361
        ##  3: -0.6264538  1.51178117
        ##  4: -0.3053884  0.59390132
        ##  5:  0.1836433  0.38984324
        ##  6:  0.3295078  1.12493092
        ##  7:  0.4874291 -0.01619026
        ##  8:  0.5757814  0.82122120
        ##  9:  0.7383247  0.94383621
        ## 10:  1.5952808 -2.21469989

    I can even write a function that will return a column of the data.table as a vector when passed the name of a column as a character vector:

        get.a.vector <- function( my.dt, my.column ) {
          ## Step 1: Convert my.column to an expression
          column.exp <- parse(text=my.column)
          ## Step 2: Return the vector
          return( my.dt[, eval(column.exp)] )
        }
        get.a.vector( test.data, 'X')
        ## [1] -0.8356286 -0.8204684 -0.6264538 -0.3053884  0.1836433  0.3295078
        ## [7]  0.4874291  0.5757814  0.7383247  1.5952808

    But I cannot pull a similar trick for list(). The inline comments are the output from the interactive browser() session:

        get.a.dt <- function( my.dt, my.column ) {
          ## Step 1: Convert my.column to an expression
          column.exp <- parse(text=my.column)
          ## Step 2: Enter the browser to play around
          browser()
          ## Step 3: Verify that a literal X works:
          my.dt[, list(X)]
          ## << not shown >>
          ## Step 4: Attempt to evaluate the parsed expression
          my.dt[, list( eval(column.exp) )]
          ## Error in `rownames<-`(`*tmp*`, value = paste(format(rn, right = TRUE),
          ## (from data.table.example.R@1032mCJ#7) :
          ##   length of 'dimnames' [1] not equal to array extent
          return( my.dt[, list(eval(column.exp))] )
        }
        get.a.dt( test.data, "X" )

    What am I missing?

    Update: Due to some confusion as to why I would want to do this, I wanted to clarify. My use case is when I need to access a data.table column when I generate the name. Something like this:

        set.seed(2)
        test.data[, X.1 := rnorm(10)]
        which.column <- 'X'
        new.column <- paste(which.column, '.1', sep="")
        get.a.dt( test.data, new.column )

    Hopefully that helps.
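    As an aside, a minimal sketch of one way to return a data.table for a computed column name without parse()/eval(): pass a character vector in j together with with=FALSE (a documented data.table idiom). The function name echoes the question; the rest is illustrative:

        get.a.dt <- function( my.dt, my.columns ) {
          ## With with=FALSE, j is treated as a vector of column names
          ## and the result comes back as a data.table
          my.dt[, my.columns, with=FALSE]
        }
        get.a.dt( test.data, "X" )          # one-column data.table
        get.a.dt( test.data, c("X", "Y") )  # two-column data.table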

    Read the article

  • jQuery Templates and Data Linking (and Microsoft contributing to jQuery)

    - by ScottGu
    The jQuery library has a passionate community of developers, and it is now the most widely used JavaScript library on the web today. Two years ago I announced that Microsoft would begin offering product support for jQuery, and that we'd be including it in new versions of Visual Studio going forward. By default, when you create new ASP.NET Web Forms and ASP.NET MVC projects with VS 2010 you'll find jQuery automatically added to your project.

    A few weeks ago during my second keynote at the MIX 2010 conference I announced that Microsoft would also begin contributing to the jQuery project. During the talk, John Resig – the creator of the jQuery library and leader of the jQuery developer team – talked a little about our participation and discussed an early prototype of a new client templating API for jQuery. In this blog post, I'm going to talk a little about how my team is starting to contribute to the jQuery project, and discuss some of the specific features that we are working on such as client-side templating and data linking (data-binding).

    Contributing to jQuery

    jQuery has a fantastic developer community, and a very open way to propose suggestions and make contributions. Microsoft is following the same process to contribute to jQuery as any other member of the community. As an example, when working with the jQuery community to improve support for templating in jQuery, my team followed these steps:

    We created a proposal for templating and posted the proposal to the jQuery developer forum (http://forum.jquery.com/topic/jquery-templates-proposal and http://forum.jquery.com/topic/templating-syntax).

    After receiving feedback on the forums, the jQuery team created a prototype for templating and posted the prototype at the Github code repository (http://github.com/jquery/jquery-tmpl).

    We iterated on the prototype, creating a new fork on Github of the templating prototype, to suggest design improvements. Several other members of the community also provided design feedback by forking the templating code.

    There has been an amazing amount of participation by the jQuery community in response to the original templating proposal (over 100 posts in the jQuery forum), and the design of the templating proposal has evolved significantly based on community feedback. The jQuery team is the ultimate determiner of what happens with the templating proposal – they might include it in jQuery core, or make it an official plugin, or reject it entirely. My team is excited to be able to participate in the open source process, and make suggestions and contributions the same way as any other member of the community.

    jQuery Template Support

    Client-side templates enable jQuery developers to easily generate and render HTML UI on the client. Templates support a simple syntax that enables either developers or designers to declaratively specify the HTML they want to generate. Developers can then programmatically invoke the templates on the client, and pass JavaScript objects to them to make the rendered content completely data driven. These JavaScript objects can optionally be based on data retrieved from a server. Because the jQuery templating proposal is still evolving in response to community feedback, the final version might look very different from the version below.
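    To give a sense of the shape being discussed, a minimal sketch using the {{= }} expression syntax and the render() call as this post describes them – the element ids and data are invented, and since the prototype was still evolving, its exact API may differ:

        <ul id="contactContainer"></ul>

        <script id="contactTemplate" type="text/html">
          <li>{{= name }}: {{= phone }}</li>
        </script>

        <script type="text/javascript">
          var contacts = [
            { name: "Scott", phone: "555-0100" },
            { name: "John",  phone: "555-0101" }
          ];
          // Render the template against the array and append the markup
          $("#contactTemplate").render(contacts).appendTo("#contactContainer");
        </script>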
    This blog post gives you a sense of how you can try out and use templating as it exists today (you can download the prototype by the jQuery core team at http://github.com/jquery/jquery-tmpl or the latest submission from my team at http://github.com/nje/jquery-tmpl).

    jQuery Client Templates

    You create client-side jQuery templates by embedding content within a <script type="text/html"> tag. For example, the HTML in the original post contains a <div> template container, as well as a client-side jQuery "contactTemplate" template (within the <script type="text/html"> element) that can be used to dynamically display a list of contacts. The {{= name }} and {{= phone }} expressions are used within the contact template to display the names and phone numbers of "contact" objects passed to the template. We can use the template to display either an array of JavaScript objects or a single object. The render() method renders the data into a string and appends the string to the "contactContainer" DIV element. When the page is loaded, the list of contacts is rendered by the template. All of this template rendering is happening on the client-side within the browser.

    Templating Commands and Conditional Display Logic

    The current templating proposal supports a small set of template commands - including if, else, and each statements. The number of template commands was deliberately kept small to encourage people to place more complicated logic outside of their templates. Even this small set of template commands is very useful though. Imagine, for example, that each contact can have zero or more phone numbers, represented by a nested array on each contact object. A template can then use the if and each template commands to conditionally display and loop over the phone numbers: if a contact has one or more phone numbers, each of them is displayed by iterating through the array with the each template command. The jQuery team designed the template commands so that they are extensible. If you have a need for a new template command then you can easily add new template commands to the default set of commands.

    Support for Client Data-Linking

    The ASP.NET team recently submitted another proposal and prototype to the jQuery forums (http://forum.jquery.com/topic/proposal-for-adding-data-linking-to-jquery). This proposal describes a new feature named data linking. Data linking enables you to link a property of one object to a property of another object - so that when one property changes the other property changes. Data linking enables you to easily keep your UI and data objects synchronized within a page. If you are familiar with the concept of data-binding then you will be familiar with data linking (in the proposal, we call the feature data linking because jQuery already includes a bind() method that has nothing to do with data-binding). Imagine, for example, that you have a page with two HTML <input> elements, and JavaScript code that links them to the properties of a JavaScript "contact" object that has a "name" and "phone" property. When you execute this code, the value of the first INPUT element (#name) is set to the value of the contact name property, and the value of the second INPUT element (#phone) is set to the value of the contact phone property.
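    A rough sketch of those linking calls – the argument shapes here are my assumption, since the proposal's exact signatures are not reproduced in this excerpt:

        <input id="name" type="text" />
        <input id="phone" type="text" />

        <script type="text/javascript">
          var contact = { name: "Scott", phone: "2065559999" };

          // Two-way link (ASSUMED signature): INPUT value <-> object property
          $("#name").linkBoth("val", contact, "name");
          $("#phone").linkBoth("val", contact, "phone");

          // Per the post, updates must go through jQuery methods to be observed:
          $(contact).attr({ name: "John" }); // the #name INPUT updates automatically
        </script>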
    The properties of the contact object and the properties of the INPUT elements are also linked – so that changes to one are also reflected in the other. Because the contact object is linked to the INPUT elements, when you request the page, the values of the contact properties are displayed. More interestingly, the values of the linked INPUT elements will change automatically whenever you update the properties of the contact object they are linked to. For example, we could programmatically modify the properties of the "contact" object using the jQuery attr() method. Because our two INPUT elements are linked to the "contact" object, the INPUT element values will be updated automatically (without us having to write any code to modify the UI elements). Note that we updated the contact object using the jQuery attr() method. In order for data linking to work, you must use jQuery methods to modify the property values.

    Two Way Linking

    The linkBoth() method enables two-way data linking. The contact object and INPUT elements are linked in both directions. When you modify the value of the INPUT element, the contact object is also updated automatically. For example, you can add a client-side JavaScript click handler to an HTML button element so that when you click the button, the property values of the contact object are displayed using an alert() dialog. If you change the value of the Name INPUT element and click the Save button, the name property of the "contact" object that the INPUT element was linked to is updated automatically. The above example is obviously trivially simple. Instead of displaying the new values of the contact object with a JavaScript alert, you can imagine instead calling a web-service to save the object to a database. The benefit of data linking is that it enables you to focus on your data and frees you from the mechanics of keeping your UI and data in sync.

    Converters

    The current data linking proposal also supports a feature called converters. A converter enables you to easily convert the value of a property during data linking. For example, imagine that you want to represent phone numbers in a standard way with the "contact" object phone property. In particular, you don't want to include special characters such as ()- in the phone number - instead you only want digits and nothing else. In that case, you can wire up a converter to convert the value of an INPUT element into this format: a converter function is passed to the linkFrom() method used to link the phone property of the "contact" object with the value of the phone INPUT element, and it strips any non-numeric characters from the INPUT element before updating the phone property. Now, if you enter the phone number (206) 555-9999 into the phone input field then the value 2065559999 is assigned to the phone property of the contact object. You can also use a converter in the opposite direction. For example, you can apply a standard phone format string when displaying a phone number from a phone property.

    Combining Templating and Data Linking

    Our goal in submitting these two proposals for templating and data linking is to make it easier to work with data when building websites and applications with jQuery. Templating makes it easier to display a list of database records retrieved from a database through an Ajax call.
    Data linking makes it easier to keep the data and user interface in sync for update scenarios.

    Currently, we are working on an extension of the data linking proposal to support declarative data linking. We want to make it easy to take advantage of data linking when using a template to display data. For example, imagine that you are using a template with {{link name}} and {{link price}} expressions to display an array of product objects. These expressions enable declarative data linking between the SPAN elements and properties of the product objects. The current jQuery templating prototype supports extending its syntax with custom template commands; in this case, we are extending the default templating syntax with a custom template command named "link". The benefit of using data linking with such a template is that the SPAN elements will be automatically updated whenever the underlying "product" data is updated. Declarative data linking also makes it easier to create edit and insert forms. For example, you could create a form for editing a product by using declarative data linking. Whenever you change the value of the INPUT elements in a template that uses declarative data linking, the underlying JavaScript data object is automatically updated. Instead of needing to write code to scrape the HTML form to get updated values, you can instead work with the underlying data directly – making your client-side code much cleaner and simpler.

    Downloading Working Code Examples of the Above Scenarios

    You can download this .zip file to get working code examples of the above scenarios. The .zip file includes 4 static HTML pages:

    Listing1_Templating.htm – Illustrates basic templating.
    Listing2_TemplatingConditionals.htm – Illustrates templating with the use of the if and each template commands.
    Listing3_DataLinking.htm – Illustrates data linking.
    Listing4_Converters.htm – Illustrates using a converter with data linking.

    You can un-zip the file to the file-system and then run each page to see the concepts in action.

    Summary

    We are excited to be able to begin participating within the open-source jQuery project. We've received lots of encouraging feedback in response to our first two proposals, and we will continue to actively contribute going forward. These features will hopefully make it easier for all developers (including ASP.NET developers) to build great Ajax applications.

    Hope this helps,

    Scott

    P.S. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Read the article

  • Why use a whitelist for HTML sanitizing?

    - by Carson Myers
    I've often wondered -- why use a whitelist as opposed to a blacklist when sanitizing HTML input? How many sneaky HTML tricks are there to open XSS vulnerabilities? Obviously script tags and frames are not allowed, and a whitelist would be used on the fields in HTML elements, but why disallow most of everything?
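    One way to see the appeal of the whitelist approach is a small sketch with the Python bleach library – the tag/attribute policy below is invented purely for illustration:

        import bleach

        # Allow only a small, known-safe set of tags and attributes.
        ALLOWED_TAGS = ["p", "em", "strong", "a", "ul", "ol", "li"]
        ALLOWED_ATTRS = {"a": ["href", "title"]}

        dirty = '<p onclick="evil()">hi <script>alert(1)</script></p>'
        clean = bleach.clean(dirty, tags=ALLOWED_TAGS, attributes=ALLOWED_ATTRS)
        print(clean)
        # -> <p>hi &lt;script&gt;alert(1)&lt;/script&gt;</p>
        # (onclick is dropped, <script> is escaped; exact output may vary by version)

    Everything not on the list is escaped or stripped, so a sneaky trick nobody thought of is handled safely by default – which is exactly the argument for whitelists over blacklists.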

    Read the article

  • WCF/ADO.NET Data Services - Could not load type 'System.Data.Services.Providers.IDataServiceUpdatePr


    Read the article

  • Data, Log and Temp file placement

    - by jchang
    First, especially for all the people with SAN storage: drive letters are of no consequence. What matters is the actual physical disk layout. Forget capacity; pay attention to the number of spindles supporting each RAID group. If the RAID group is shared with another application, make sure the SLA guarantees read and write latency. One very large company conducted a stress test in the QA environment. The SAN admin carved the LUNs from the same pool of disks as production, but thought he had a really...(read more)

    Read the article

  • Live from ODTUG - Big Data and SQL session #2

    - by Jean-Pierre Dijcks
    Sitting in Dominic Delmolino's session at ODTUG (KScope 12). If the session count at conferences is any indication, we will see more and more people start to deploy MapReduce in the database. And yes, that would be with SQL and PL/SQL first and foremost. Both Dominic and our own Bryn Llewellyn are doing MapReduce-in-the-database presentations. Since I have seen both, I would advise people to first look through Dominic's session to get a good grasp on what mappers do and what reducers do, then dive into Bryn's for a bunch of PL/SQL examples. The thing I like about Dominic's is the last slide (a recursive WITH statement) to do this in SQL... Now I am hoping that next year we will see tools vendors show off how they work with Hadoop and MapReduce (at least talking about the concepts!!).
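    To give a feel for the idea (far simpler than the recursive WITH approach mentioned above), the classic map/reduce split can be phrased in plain SQL: the projection acts as the mapper and the GROUP BY aggregation as the reducer. Table and column names below are from the familiar HR sample schema, purely for illustration:

        -- "Map": emit a (key, value) pair per input row
        -- "Reduce": aggregate the values for each key
        SELECT department_id AS key,
               SUM(salary)   AS value
        FROM   employees
        GROUP  BY department_id;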

    Read the article

  • Better data structure for a game like Bubble Witch

    - by CrociDB
    I'm implementing a bubble-witch-like game (http://www.king.com/games/puzzle-games/bubble-witch/), and I was thinking about the best way to store the "bubbles" and work with them. I thought of using graphs, but that might be too complex for a trivial thing. I also thought of a matrix, just like a tile map, but that might get too 'workaroundy'. I don't know. I'll be doing it in Flash/AS3, though. Thanks. :)

    Read the article

  • How can I move towards the Business Intelligence/ data mining fields from software developer [closed]

    - by user1758043
    I am working as a Python developer and I work with django. I also do some web scraping and building spiders and bots. Now from there I want to make my move to Business Intelligence. I just want to know how I can move into that field. Because companies are not going to hire me in that field directly, I just want to know how I can make the transition. I was thinking of first working as a Database developer in SQL and then I can see further. But I want advice from you guys so that I can start learning that stuff with a change of jobs in mind. Here in my area there are plenty of jobs in all areas, but I need to know how to transition and what things I should learn before making that transition. Jobs are plentiful here, so if I know my stuff, getting a job is a piece of cake because they don't have enough people. The same jobs keep getting advertised for months and months.

    Read the article

  • map data structure in pacman

    - by Sam Fisher
    I am trying to make a Pacman game in C# using GDI+. I have done some basic work, and I have previously replicated games like copter-it and Minesweeper. But I am confused about how to implement the map in Pacman – I mean which data structure to use, so I can use it for moving AI-controlled objects and checking collisions with walls. I thought of a 2D array of ints, but that didn't make sense to me. Looking for some help. Thanks.
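    For what it's worth, a minimal sketch of the 2D-int-array idea from the question – tile codes, dimensions and the method name are arbitrary:

        // 0 = corridor, 1 = wall, 2 = pellet (codes are arbitrary)
        int[,] map =
        {
            { 1, 1, 1, 1, 1 },
            { 1, 2, 2, 2, 1 },
            { 1, 2, 1, 2, 1 },
            { 1, 2, 2, 2, 1 },
            { 1, 1, 1, 1, 1 },
        };

        // Collision check: may an actor occupy tile (row, col)?
        bool IsWalkable(int[,] m, int row, int col)
        {
            return row >= 0 && row < m.GetLength(0)
                && col >= 0 && col < m.GetLength(1)
                && m[row, col] != 1;
        }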

    Read the article

  • Wiped data, and duplicated folders into files.

    - by Kaustubh P
    Something weird happened today, and I don't know how. Within a folder, all folders have a file by the same name, with a colon appended to it. And all the files from the inner-most directory in my home have been dumped to ~, with a size of 0 bytes. I have not executed any scripts or anything. I was just checking out some easter eggs, namely the gegls from outer space and free the fish, and was away from the computer and was logged out because of the screensaver. I couldn't log back in with my password, so I just reset the PC, and while booting, the PC went into a drive check. BUT, IIRC, I saw the duplicate "folder files" before I had logged out, so that's not the reason! All the files have a timestamp of 14 Jan. Also, the contents of my eclipse folder have been dumped into ~, right down to the jars and ini files. HELP!

    Read the article

  • Create association between informations

    - by Andrea Girardi
    I deployed a project some days ago that allows extracting medical articles using the results of a questionnaire completed by a user. For instance, if I reply on the questionnaire that I'm affected by type 2 diabetes and I'm a smoker, my algorithm extracts all articles related to diabetes, bubbling up the articles that contain information about type 2 diabetes and smoking. Basically we created a list of topics and, for every topic, we defined a kind of "guideline" that allows us to extract and order information for a user. I'm quite sure there are better ways to put two pieces of content in relationship, but I was not able to find them on the net. Could you suggest a model, algorithm or paper to better understand this kind of problem, one that helps me find a faster and more accurate way to extract information for a user?

    Read the article

  • recover data in linux removed with rm

    - by user3717896
    Today I deleted my home directory (by mistake) with:

        sudo rm -rf *

    And when I use extundelete I get this message:

        root@ubuntu:~# sudo extundelete --restore-directory /home/hamed/ /dev/sda2
        WARNING: Extended attributes are not restored.
        Loading filesystem metadata ... 746 groups loaded.
        Loading journal descriptors ... 29931 descriptors loaded.
        Searching for recoverable inodes in directory /home/hamed/ ... 498 recoverable inodes found.
        Looking through the directory structure for deleted files ... 498 recoverable inodes still lost.
        No files were undeleted.

    Why can't it recover anything? Can anyone help me get back my Desktop, Documents, etc.? I have Ubuntu 14.04.
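    For reference, extundelete also has a --restore-all mode, and recovery generally works best from a live session with the target filesystem unmounted, since continued use overwrites freed blocks. A sketch, reusing the question's device path:

        # From a live CD/USB, with /dev/sda2 not mounted:
        sudo umount /dev/sda2
        sudo extundelete /dev/sda2 --restore-all
        # Recovered files are written under ./RECOVERED_FILES/ by default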

    Read the article

  • Trying to recover deleted Ubuntu partition

    - by user110984
    I made a mistake in logging into my 200 GB Ubuntu partition. I could not access GRUB after that. Using a live CD I then ran Boot_Repair and apparently deleted the partition, I guess because I ran it from my 70 GB Windows partition. I can send the results of boot_info from before that, and of Boot_Repair. Then I ran TestDisk, which apparently found only dev/sda/ -320GB / 298 / GiB - WDC - WD3200BEVT-22A23T0. (Was there any more I could have done with TestDisk? I looked at the TestDisk_Step_By_Step example and found no way forward, given that no other partitions turned up.) I have run gpart and found this:

        /sda1 - 15 GB
        /sda2 - system reserved
        /sda3 - 70.15 GB
        /sda4 - extended 212.84
        unallocated - 209.10
        /sda5 - unknown 3.74

    I have been told I can recover the partition using gparted's Rescue start end command, but I don't know what to enter for start and end.

    [--EDIT: TestDisk Deeper Search stated that "the following partitions can't be recovered" and listed a 220-GB Linux partition 6 times. Then it stated that "The current number of heads per cylinder is 255 but the correct value may be 128" and I could try to change it in the Geometry menu (because apparently these are overlapping partitions). So should I do that?--]

    Read the article

  • How to retrieve data from a corrupted volume

    - by explorex
    Hi, my Ubuntu 10.10 just crashed, probably due to hardware error (in the end I was getting errors like "Unknown filesystem ..... grub> ..", and it went to the GRUB console before I could take any other action). I reinstalled the same version from a USB stick. I had Ubuntu installed with the ext4 file system and I also have the same filesystem on the same hard disk on a different drive. When I try to access my previous filesystem, I get errors:

        Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sda6,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    I had some important files in the previous volume; I don't know how to retrieve them. And what are the chances that I would get the same outcome (hardware error)? Please help me!
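    A common first step for a "bad superblock" error on ext4 is to try one of the backup superblocks – whether this helps depends on the actual damage, and it is safest run against an unmounted filesystem (ideally a cloned image). The device path below is the question's:

        # List the backup superblock locations recorded at mkfs time
        sudo dumpe2fs /dev/sda6 | grep -i superblock

        # Check the filesystem using a backup superblock (32768 is a common location)
        sudo e2fsck -b 32768 /dev/sda6

        # If the check succeeds, mount read-only and copy the files off
        sudo mkdir -p /mnt/rescue
        sudo mount -o ro /dev/sda6 /mnt/rescue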

    Read the article

  • Master Data Management for Product Data

    In this AppsCast, Hardeep Gulati, VP PLM and PIM Product Strategy discusses the benefits companies are getting from Product MDM, more details about Oracle Product Hub solution and the progress, and where we are going from here.

    Read the article

  • What is meant by a primitive data type?

    - by Appy
    My understanding of a primitive datatype is that it is a datatype provided by a language implicitly (others are user-defined classes). So different languages have different sets of datatypes which are considered primitive for that particular language. Is that right?

    And what is the difference between a "basic datatype" and a "built-in datatype"? Wikipedia says a primitive datatype is either of the two.

    PS - Why is the "string" type considered a primitive type in SNOBOL4 but not in Java?
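    To make the Java side of the PS concrete, a small sketch contrasting a primitive with a class type (standard Java, nothing assumed beyond the language itself):

        int count = 42;           // primitive: a bare value, no methods
        Integer boxed = count;    // wrapper class: autoboxes the primitive
        String name = "hello";    // String is a class (a reference type), even
                                  // though literal syntax makes it feel built-in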

    Read the article

  • In Search Data Structure And Algorithm Project Title Based on Topic

    - by Salehin Suhaimi
    As the title says, my lecturer gave me a project that I need to finish in 3 weeks, before the final semester exams. So I thought I would start now. The requirement is to "build a simple program that has a GUI, based on all the chapters that we've learned." But I got stuck on WHAT program I should build. Any ideas for a program related to the chapters I've learned? Any input will help: list, array list, linked list, vectors, stacks, queues, ADT, hashing, binary search tree, AVL tree. That's about all I can remember. Any idea where I can start looking?

    Read the article
