Search Results

Search found 10391 results on 416 pages for 'sys dm exec requests'.

  • Oracle & OAUG PO SIG's Procurement Executive Workshop - Burlington, MA April 29th, 2011

    - by david.hope-ross(at)oracle.com
    OAUG PO SIG and Oracle invite you to a day of learning and networking with your Boston area procurement peers. This event is focused on facilitating discussion among procurement executives, promoting best practices from leading customers, and sharing the vision that is driving enhancements to E-Business Suite procurement. OAUG PO SIG members and Oracle will share practical advice that improves technology adoption and lowers risk. Topics of interest include supplier management, upgrades, cloud-based deployment, as well as spend classification and analytics. For more information and registration please visit http://www.oracle.com/us/dm/h2fy11/68745-nafm10012033mpp102-se-334896.html.

  • Should one always know what an API is doing just by looking at the code?

    - by markmnl
    Recently I have been developing my own API, and with that invested interest in API design I have been keen to learn how to improve it. One aspect that has come up a couple of times (not from users of my API, but in discussions I have observed on the topic) is: one should know just by looking at the code calling the API what it is doing. For example, see this discussion on GitHub for the Discourse repo. It goes something like this:
        foo.update_pinned(true, true);
    Just by looking at the code (without knowing the parameter names, documentation, etc.) one cannot guess what it is going to do - what does the 2nd argument mean? The suggested improvement is to have something like:
        foo.pin()
        foo.unpin()
        foo.pin_globally()
    And that clears things up (the 2nd arg was whether to pin foo globally, I am guessing), and I agree that in this case the latter would certainly be an improvement. However, I believe there can be instances where methods to set different but logically related state would be better exposed as one method call rather than separate ones, even though you would not know what the call is doing just by looking at the code. (So you would have to resort to looking at the parameter names and documentation to find out - which personally I would always do anyway if I am unfamiliar with an API.) For example, I expose one method SetVisibility(bool, string, bool) on a FalconPeer, and I acknowledge that just looking at the line:
        falconPeer.SetVisibility(true, "aerw3", true);
    you would have no idea what it is doing. It is setting 3 different values that control the "visibility" of the falconPeer in the logical sense: accept join requests, only with password, and reply to discovery requests. Splitting this out into 3 method calls could lead a user of the API to set one aspect of "visibility" while forgetting the others, which I force them to think about by exposing only the one method that sets all aspects of "visibility". Furthermore, when users want to change one aspect they almost always want to change another, and they can now do so in one call.
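
    A middle ground (not from the original discussion; the option names below are invented for illustration) is to keep the single call but have it take a self-describing options object, sketched here in TypeScript:

        interface Visibility {
          acceptJoinRequests: boolean;
          joinPassword?: string;            // omit to require no password
          replyToDiscoveryRequests: boolean;
        }

        class FalconPeer {
          // One call still forces every aspect of "visibility" to be
          // considered together, but the call site names each value.
          setVisibility(v: Visibility): void {
            /* apply the three settings */
          }
        }

        new FalconPeer().setVisibility({
          acceptJoinRequests: true,
          joinPassword: "aerw3",
          replyToDiscoveryRequests: true,
        });

    This keeps the "change several related aspects in one call" property while removing the unreadable positional booleans.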

  • New database profiling support in ANTS Performance Profiler

    - by Ben Emmett
    In May last year, the ANTS Performance Profiler team added the ability to profile database requests your application makes to SQL Server or Oracle. The really cool thing is that you’re shown those requests in the application’s call tree, so you can see what .NET code caused those queries to run. It’s particularly helpful if you’re using an ORM which automagically generates and runs queries for you, but which doesn’t necessarily do it in the most efficient way possible. Now, by popular demand, we’ve added support for profiling MySQL (or MariaDB) and PostgreSQL, so you can see queries run against those databases too. Some of you have also said that you’re using the Devart dotConnect data providers instead of the native .NET ones, so we’ve added support for those drivers too. Hope it helps! For the record, here’s a list of supported connectors:
        SQL Server: .NET Framework Data Provider; Devart dotConnect for SQL Server
        Oracle: .NET Framework Data Provider; Oracle Data Provider for .NET; Devart dotConnect for Oracle
        MySQL / MariaDB: MySQL Connector/Net; Devart dotConnect for MySQL
        PostgreSQL: Npgsql .NET Data Provider for PostgreSQL; Devart dotConnect for PostgreSQL
        SQL Server Compact Edition: .NET Framework Data Provider for SQL Server Compact Edition; Devart dotConnect for SQL Server Pro
    Have we missed a connector or database which you’d find useful? Tell us about it in the comments or by emailing [email protected]. Ben

  • How to interpret number of URL errors in Google webmaster tools

    - by user359650
    Google has recently made some changes to Webmaster Tools, which are explained here: http://googlewebmastercentral.blogspot.com/2012/03/crawl-errors-next-generation.html One thing I could not find out is how to interpret the number of errors over time. At the end of February we migrated our website and didn't implement redirect rules for some pages (quite a few, actually), and the Crawl errors report now shows errors for these pages. What I don't know is whether the number of errors is cumulative over time or not (i.e. if Google bots crawl your website on 2 different days and find 1 separate issue on each day, will they report 1 error on each day, or 1 on the 1st and 2 on the 2nd?). Based on the Crawl stats, we can see that the number of requests made by Google bots doesn't increase. Therefore I believe the number of errors reported is cumulative: an error detected on one day is taken into account and reported on subsequent days until the underlying problem is fixed and the page is crawled again (or you manually mark the error as fixed), because if you don't make more requests to a website, there is no way to check new pages and old pages at the same time. Q: Am I interpreting the number of errors correctly?

  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (ColdFusion) application from a single data source (an MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff, plus additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs, carrying business product/item and manufacturing/SCM planning data. The reason I have the opportunity and need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business-wide as a complete replacement. This newer ERP system offers several interfaces for third-party applications like mine, from direct SQL access to either a .NET web service or a SOAP-like web service. I have found several suitable frameworks I would be happy to use (ColdSpring, FW/1), but I am not sure what design patterns apply to my data access object/component or how to manage the connection/session tokens. With this background, my question has the following three parts:
        1. I have concerns about moving from the relative safety of an SSIS job, which insulates me from the downtime and speed of the ERP system, to connecting directly with one of the web services, which I note seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate/use to cache/protect my data tier?
        2. It is my understanding that the data access objects (the components that connect directly with the web services and convert the results into the data types I can then work with in my domain objects) should be singletons (and will act as an Adapter/Facade). Am I correct?
        3. As part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this), which responds with a session token that must be provided on every subsequent request. Do I do this once and share it across the whole application? Do I set up a new "connection" for every user of my application and keep the token in their session scope (which might quickly hit licensing limits)? Do I set the "connection" up per page request? Or is there a design pattern I am missing that can manage multiple "connections", where a request uses the first free "connection"? It is worth noting that if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and that depending on which web service I use I might need to manually close the "connection/session".
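
    On the third part, one pattern worth naming is a small pool of pre-authenticated sessions: each request borrows the first free token and returns it when done, which caps license usage without a connection per user or per page request. A minimal sketch in TypeScript, with the ERP login/call signatures invented for illustration:

        type Token = string;

        class ErpSessionPool {
          private free: Token[] = [];
          private waiters: ((t: Token) => void)[] = [];

          constructor(private login: () => Promise<Token>, private size = 5) {}

          async init(): Promise<void> {
            // Authenticate once per pooled "connection"; called again after reset().
            for (let i = 0; i < this.size; i++) this.free.push(await this.login());
          }

          private acquire(): Promise<Token> {
            const t = this.free.pop();
            if (t !== undefined) return Promise.resolve(t);
            return new Promise(resolve => this.waiters.push(resolve)); // wait for a free token
          }

          private release(t: Token): void {
            const waiter = this.waiters.shift();
            if (waiter) waiter(t);
            else this.free.push(t);
          }

          // Every web-service call goes through here; callers never see tokens.
          async call<T>(request: (token: Token) => Promise<T>): Promise<T> {
            const token = await this.acquire();
            try {
              return await request(token);
            } finally {
              this.release(token);
            }
          }

          // If the ERP system dies, drop all tokens and re-run init().
          reset(): void {
            this.free = [];
          }
        }

    A caching decorator in front of the pool then addresses the first concern: reads that tolerate staleness are served from the cache, and only misses pay the one-second web-service round trip.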

  • Staying Ahead of the Curve - Deloitte's 2012 Human Capital Trends Webcast | June 13th

    - by Jay Richey, HCM Product Marketing
    Businesses today are calling on HR to leap ahead and help to manage change in the face of complex challenges that touch so many parts of the enterprise. This webinar will provide an overview of eight major Human Capital Trends surfacing in 2012. Understanding the trends — what they mean for both leading HR and for leading the business — is an opportunity for organizations to be proactive and stay ahead of the curve.
    June 13, 2012, 12:00 p.m. – 2:00 p.m. CT, Online
    Featured Speakers:
        Michael Gretczko, Principal, Deloitte Consulting LLP, Human Capital Practice
        Dan Helfrich, Principal, Deloitte Consulting LLP, Federal Human Capital Practice Leader
        Greg Vert, Senior Consultant, Deloitte Consulting
    Evite & Registration: http://www.oracle.com/us/dm/75810-wwmk11040178mpp035c007-oem-1633667.html

  • Design pattern for client/server sessions?

    - by nonot1
    Are there any common patterns or general guidance I can learn from for how to design a client/server system where both the client and server must maintain some kind of per-client session state? I've found any number of libraries that can help with some of the plumbing, but it's the overall design I'm wondering about. Open issues in my mind:
        1. How to structure the client/server communication so that both bidirectional synchronous and asynchronous requests are possible?
        2. The server side needs to spawn a couple of per-connected-client, session-long helper processes. How to manage that?
        3. How to manage the mapping from a given client (and any of its requests) to server state and helper process instances, in the face of multiple clients and intermittent network connectivity?
    Most communication can be simple blocking request/reply, but some requests will be long-running processing tasks that the client will want to keep tabs on. To the extent that it matters, the platform is Linux/C/C++. Not web based. Just an existing thick-client software app being modified to talk to backend servers for some tasks.
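
    For the third issue, a language-agnostic sketch (written in TypeScript for brevity; every name is invented) of per-client session state that survives reconnects, keyed by a stable client id rather than by connection:

        interface Session {
          clientId: string;
          helperPids: number[];                           // session-long helper processes
          pending: Map<number, (reply: unknown) => void>; // requestId -> deliver callback
          lastSeen: number;                               // to expire abandoned sessions
        }

        declare function spawnHelpers(clientId: string): number[];           // assumed
        declare function sendToClient(clientId: string, requestId: number,
                                      reply: unknown): void;                 // assumed

        const sessions = new Map<string, Session>();

        function sessionFor(clientId: string): Session {
          let s = sessions.get(clientId);
          if (!s) { // first contact, or reconnect after state was expired
            s = { clientId, helperPids: spawnHelpers(clientId),
                  pending: new Map(), lastSeen: 0 };
            sessions.set(clientId, s);
          }
          s.lastSeen = Date.now();
          return s;
        }

        // Each request carries an id, so long-running work can complete later
        // and the reply is correlated back over whatever link is live by then.
        function beginRequest(clientId: string, requestId: number): void {
          sessionFor(clientId).pending.set(requestId,
            reply => sendToClient(clientId, requestId, reply));
        }

        function completeRequest(clientId: string, requestId: number, reply: unknown): void {
          const s = sessionFor(clientId);
          const deliver = s.pending.get(requestId);
          s.pending.delete(requestId);
          deliver?.(reply);
        }

    The same request-id tagging answers the first issue: synchronous calls just block on their id, while asynchronous ones are delivered whenever they finish.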

  • The JavaServer Faces 2.2 viewAction Component

    - by Janice J. Heiss
    Life just got easier for users of JavaServer Faces. In a new article, now up on otn/java, titled “New JavaServer Faces 2.2 Feature: The viewAction Component,” Tom McGinn, Oracle’s Principal Curriculum Developer for Oracle Server Technologies, explores the advantages offered by the JavaServer Faces 2.2 view action feature, which, according to McGinn, “simplifies the process for performing conditional checks on initial and postback requests, enables control over which phase of the lifecycle an action is performed in, and enables both implicit and declarative navigation.” As McGinn observes: “A view action operates like a button command (UICommand) component. By default, it is executed during the Invoke Application phase in response to an initial request. However, as you'll see, view actions can be invoked during any phase of the lifecycle and, optionally, during postback, making view actions well suited to performing preview checks.” McGinn explains that the JavaServer Faces 2.2 view action feature offers several advantages over the previous method of performing evaluations before a page is rendered:
        * View actions can be triggered early on, before a full component tree is built, resulting in a lighter-weight call.
        * View action timing can be controlled.
        * View actions can be used in the same context as the GET request.
        * View actions support both implicit and explicit navigation.
        * View actions support both non-faces (initial) and faces (postback) requests.
    Read the complete article here.
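
    For flavor, a view action is declared in the page's metadata; a minimal sketch (bean name, attribute values and check logic invented for illustration):

        <f:metadata>
            <f:viewParam name="orderId" value="#{orderBean.orderId}"/>
            <f:viewAction action="#{orderBean.checkAccess}"
                          onPostback="true"
                          phase="APPLY_REQUEST_VALUES"/>
        </f:metadata>

    Here onPostback="true" opts the action into postback requests and phase moves it earlier than the default Invoke Application phase, matching the capabilities the article describes.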

  • Authentication system brainstorm

    - by gansbrest
    Hi. We have multiple small websites (microsites) and one main high-traffic site with a big user base. The requirement is to build an authentication system that allows users to log in with the same identity across the network. All the websites run on different domains, are powered by the Drupal 6 CMS, and have separate databases (so sharing tables with a prefix is not an option, and it would create a huge mess in the db anyway). Here is the set of core requirements I came up with:
        1. Users should be able to log in with the same credentials to all sites within the network.
        2. User data is shared between the main site (storage) and all microsites within the network.
        3. Data is synchronized across the network when a user changes it (updates an email or password, for example).
        4. The login/registration process should be seamless and consistent.
        5. Users can register on any of the sites across the network and use that identity to log in later on.
    In the future there might be a need to add OpenID authentication options. Basically we are looking at something similar to what Stack Exchange does, but I'm not sure whether they have a central user base or not. I was thinking about a custom solution with two parts (modules): one stored on the main site, holding user data and responding to requests from clients; and a second module placed on each microsite, sending requests to the master. Some kind of client-server setup. One complication I see right away is #3, data synchronization across the network. I just don't want to reinvent the wheel, and maybe some work has already been done in this direction. Looking forward to your ideas on how to approach this project. EDIT: We use a MySQL database.
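
    As a sketch of that client-server split (in TypeScript for brevity; the endpoint, field names and response shape are all invented), the microsite module would delegate every credential check to the master:

        // Microsite side: ask the master site to verify credentials.
        interface AuthResult {
          ok: boolean;
          userId?: number;
          profile?: { email: string; displayName: string };
        }

        async function loginViaMaster(username: string, password: string): Promise<AuthResult> {
          const res = await fetch("https://main-site.example/auth/login", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ username, password }),
          });
          return (await res.json()) as AuthResult;
        }

    On success the microsite creates or updates its local user record from profile, so requirement #2 piggybacks on the login call; requirement #3 still needs a push or pull channel for changes made while a user never visits a given microsite.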

  • Issue tracking multiple domains with Google Analytics

    - by user359650
    I have 2 domains, mydomain.com and mydomain.net, which I'm trying to track with the same GA code. Here are the options I turned on:
        Subdomains of mydomain: ON (examples: www.mydomain.com, apps.mydomain.com, store.mydomain.com)
        Multiple top-level domains of mydomain: ON (examples: mydomain.uk, mydomain.cn, mydomain.fr)
    Which gave me the following code:
        _gaq.push(['_setAccount', 'UA-123456789-1']);
        _gaq.push(['_setDomainName', 'mydomain.com']);
        _gaq.push(['_setAllowLinker', true]);
        _gaq.push(['_trackPageview']);
    In this help page I read that _setDomainName must be changed for each domain, which I did:
        - if you go to mydomain.net you get _gaq.push(['_setDomainName', 'mydomain.net']);
        - if you go to mydomain.com you get _gaq.push(['_setDomainName', 'mydomain.com']);
    When I generate traffic on both mydomain.com and mydomain.net and watch the GA push requests with Firebug, I can see requests generated for both domains, and the parameter called utmhn carries the proper domain value (matching both _setDomainName and the browser address bar). However, when I monitor the real-time statistics under Home->Real-Time->Overview, I see pageviews for mydomain.net BUT NOT for mydomain.com :( What am I missing to properly track both domains? PS: the help page I mentioned talks about setting up cross-domain links, which I didn't do for now, as my understanding is that it shouldn't be needed for what I'm trying to do. Also, I want to mention that I do not have any tracking code on either of these 2 domains other than the one above.
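
    For reference, the cross-domain links that the help page describes look roughly like this with the classic ga.js _link method (URL invented); without them, the visitor's cookies are not carried from one domain to the other, so visits can show up inconsistently across the two properties:

        <a href="http://mydomain.net/page"
           onclick="_gaq.push(['_link', this.href]); return false;">
          Go to mydomain.net
        </a>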

  • Oracle Traffic Director – download and check out new cool features in 11.1.1.7.0 by Frances Zhao

    - by JuergenKress
    As Oracle's strategic layer-7 software load balancer, Oracle Traffic Director is fast, reliable, secure, easy to use and scalable; you can deploy it as the reliable entry point for all TCP, HTTP and HTTPS traffic to application servers and web servers in your network. The latest release, Oracle Traffic Director 11.1.1.7.0, is available for Exalogic and Database Appliance! For downloads and details please visit the Traffic Director OTN website. In this release, we have introduced some major new functionality and improvements:
        Web application firewall. Oracle Traffic Director supports web application firewalls. A web application firewall (WAF) is a filter or server plugin that applies a set of rules, called rule sets, to an HTTP request. Using a web application firewall, users can inspect traffic and deny requests to protect back-end applications from CSRF vulnerabilities and common attacks such as cross-site scripting.
        WebSocket connections. Oracle Traffic Director handles WebSocket connections by default. WebSocket connections are long-lived and allow support for live content, real-time games, video chatting, and so on.
        Support for LDAP/T3 load balancing. Oracle Traffic Director now supports basic LDAP/T3 load balancing at layer 7, where requests are handled as generic TCP connections for traffic tunneling. It works in full-NAT mode.
    Please download and try it out. For more information, check out the data sheet and the documentation. For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

  • mounting external hard drive EXT4: "the unlocked device does not have a recognizable filesystem on it"?

    - by user824924
    I'm having problems mounting ext4 partitions (inside a LUKS partition) on external drives. The drives are fine; there is no problem whatsoever with the drives and no filesystem corruption. This started after a recent automatic system upgrade and a manual upgrade to kernel 3.12.0. It goes like this:
        1. I plug in the external drive.
        2. I am asked for the passphrase for the LUKS device.
        3. The LUKS partition is correctly unlocked/opened.
        4. Instead of proceeding to mount the now-exposed ext4 partition, a pop-up says: "the unlocked device does not have a recognizable filesystem on it".
    The same happens in this case:
        $ gvfs-mount -d /dev/sdc2
        Enter a passphrase to unlock the volume
        The passphrase is needed to access encrypted data on WDC WD250... (250 GB Hard Disk).
        Password:
        Error mounting /dev/sdc2: The unlocked device does not have a recognizable file system on it
    Doing a manual sudo mount /dev/dm-1 /mnt/testfolder works with no errors, and there is no problem with the filesystem (it has been fscked). Also, there doesn't seem to be anything useful written to dmesg when this happens. What gives?

  • Webcast: Introduction To Causal Factors

    - by ChristineS-Oracle
    Webcast: Introduction To Causal Factors
    Date: June 11, 2014, at 11:00 am ET, 10:00 am CT, 9:00 am MT, 8:00 am PT, 8:30 pm India Time (Mumbai, GMT+05:30)
    This one-hour advisor webcast will provide an introduction to causal factors for Demand Management and AFDM. Pre-seeded causal factors will be discussed, as well as the cases where they are not appropriate. Scenarios for when to add causal factors will be covered, along with best-practice methods for adding and using them. Topics will include:
        * Causal factors in DM and AFDM
        * Pre-seeded causal factors
        * When to modify causal factor settings
        * Best practices when working with causal factors
    Details & Registration: Doc ID 1664606.1

  • Solaris Day in NY and Boston

    - by unixman
    Hey all -- we're hosting yet another Solaris event in New York. This one will be on November 29th and will focus on some key in-depth technologies in Solaris 11, which was just released earlier this month. Speakers include Dave Miner, Glenn Brunette and Jeff Victor. It starts in the morning and goes through lunch; check out the agenda at the RSVP link below. Topics include: the new and improved installation and package management experience, virtualization, ZFS and security. Please check it out and come join us!
    http://www.oracle.com/go/?&Src=7239490&Act=34&pcode=NAFM10128512MPP016
    Additionally, if you are in the Boston area, an identical event will be held in Burlington the following day, on November 30th. The RSVP link for that is http://www.oracle.com/us/dm/h2fy11/21285-nafm10128512mpp013-oem-525338.html
    Hope to see you there!

  • Can Clojure's thread-based agents handle c10k performance?

    - by elliot42
    I'm writing a c10k-style service and am trying to evaluate Clojure's performance. Can Clojure handle this scale of concurrency with its thread-based agents? Other high-performance systems seem to be moving towards async-IO/events/greenlets, albeit at a seemingly higher complexity cost. Suppose there are 10,000 clients connected, sending messages that should be appended to 1,000 local files--the Clojure service is trying to write to as many files in parallel as it can, while not letting any two separate requests mangle the same single file by writing at the same time. Clojure agents are extremely elegant conceptually--they would allow separate files to be written independently and asynchronously, while serializing (in the database sense) multiple requests to write to the same file. My understanding is that agents work by starting a thread for each operation (assume we are IO-bound and using send-off)--so in this case is it correct that it would start 1,000+ threads? Can current-day systems handle this number of threads efficiently? Most of them should be IO-bound and sleeping most of the time, but I presume there would still be a context-switching penalty that is theoretically higher than in async-IO/event-based systems (e.g. Erlang, Go, node.js). If the Clojure solution can handle the performance, it seems like the most elegant thing to code. However, if it can't, then something like Erlang or Go's lightweight processes might be preferable, since they are designed to have tens of thousands of them spawned at once, and are only moderately more complex to implement. Has anyone approached this problem in Clojure or compared it to these other platforms? (Thanks for your thoughts!)
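
    For concreteness, a minimal Clojure sketch of the agent-per-file design (assuming file-paths holds the 1,000 target paths):

        ;; One agent per file: actions sent to the same agent run one at a
        ;; time, so two requests can never interleave writes to one file,
        ;; while distinct files are written in parallel.
        (def file-agents
          (into {} (for [path file-paths] [path (agent nil)])))

        ;; send-off schedules the action on an unbounded pool suited to
        ;; blocking IO, rather than the fixed-size pool used by send.
        (defn append-line! [path line]
          (send-off (file-agents path)
                    (fn [_]
                      (spit path (str line "\n") :append true)
                      nil)))

    Note the thread count is driven by how many agents are busy at once, not one permanent thread per agent: the send-off pool's threads are reused between actions.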

  • How do you manage a complexity jump?

    - by glenatron
    It seems an infrequent but common experience: you're working on a project and suddenly something turns up unexpectedly, throws a massive spanner in the works and ramps up the complexity a whole lot. For example, I was working on an application that talked to SOAP services on various other machines. I whipped up a prototype that worked fine, then went on to develop a regular front end and generally got everything up and running in a nice, fairly simple and easy-to-follow fashion. It worked great until we started testing across a wider network, and suddenly pages started timing out as the latency of the connections, plus the time required to perform calculations on remote machines, resulted in timed-out requests to the SOAP services. It turned out that we needed to change the architecture to spin requests out onto their own threads and cache the returned data, so it could be updated progressively in the background rather than calculated on a request-by-request basis. The details of that scenario are not too important - indeed it's not a great example, as it was quite foreseeable, and people who have written a lot of apps of this type for this type of environment might have anticipated it - except that it illustrates how one can start with a simple premise and model and suddenly face an escalation of complexity well into the development of the project. What strategies do you have for dealing with these kinds of functional changes whose need arises - often as a result of environmental factors rather than specification changes - late in the development process or as a result of testing? How do you balance avoiding the premature-optimisation / YAGNI / overengineering risks of designing a solution that mitigates possible but not necessarily probable issues, against developing a simpler solution that is likely to be as effective but doesn't incorporate preparedness for every eventuality?

  • Algorithm to reduce calls to mapping API

    - by aidan
    A random distribution of points lies on a map. This data lies behind an API, and I want to grab the complete set of points within a given bounding box. I can query the API with the bounding box and it will return the set of points that fall within that box. The problem is that the API limits the result set to 10 items, with no pagination and no indication of whether any points have been omitted. So I made a recursive algorithm that takes a bounding box and requests the points that lie within it. If the result set is exactly 10 items, then I split the bounding box into four quadrants and recurse. It works fine, but my question is this: if I want to minimize the number of API calls, what is the optimal way to split the bounding box? Splitting it into quadrants was just an arbitrary decision. When there are a lot of points on the map, I have to drill down many levels before I start getting meaningful results. So I imagine it might be faster to split the box into, say, 9, 16, or more sections. But if I do that, then I eventually reach a point where a lot of requests return 0 results, which isn't so efficient. Also, does the size of the limit on the result set affect the answer? (This is all assuming that I have no prior knowledge of the nominal point density in the bounding box.)
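
    Here is a sketch of the quadrant recursion described above (in TypeScript; queryApi stands for the assumed capped API call):

        type Box = { minX: number; minY: number; maxX: number; maxY: number };
        type Point = { x: number; y: number };

        declare function queryApi(box: Box): Promise<Point[]>; // returns at most `limit` points

        async function fetchAll(box: Box, limit = 10): Promise<Map<string, Point>> {
          const points = await queryApi(box);
          const found = new Map(points.map(p => [`${p.x},${p.y}`, p] as [string, Point]));
          if (points.length < limit) return found; // under the cap: nothing was omitted
          const { minX, minY, maxX, maxY } = box;
          const midX = (minX + maxX) / 2;
          const midY = (minY + maxY) / 2;
          const quadrants: Box[] = [
            { minX, minY, maxX: midX, maxY: midY },
            { minX: midX, minY, maxX, maxY: midY },
            { minX, minY: midY, maxX: midX, maxY },
            { minX: midX, minY: midY, maxX, maxY },
          ];
          for (const q of quadrants) {
            for (const [k, p] of await fetchAll(q, limit)) found.set(k, p);
          }
          return found; // the Map dedupes points that sit on shared quadrant edges
        }

    Generalizing the split factor is then a matter of cutting into an n-by-n grid instead of 2-by-2, which is exactly the knob the question asks about.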

  • Trim on encrypted SSD--Urandom first?

    - by cb474
    My understanding (I'm not sure I'm getting this all right) is that if one uses Trim on an encrypted SSD, it defeats some of the security benefits, because the drive will write zeros to empty space (as files are deleted). See: http://www.askubuntu.com/questions/115823/trim-on-an-encrypted-ssd And: http://asalor.blogspot.com/2011/08/trim-dm-crypt-problems.html My question is: From the perspective of the performance of the SSD and the functioning of Trim, would it therefore be better to simply zero out the SSD, before setting up an encrypted system, rather than writing random data to the drive, with urandom, as one usually does? Would this basically leave one with the same level of security anyway? And more importantly, would it better enable the Trim functionality to work as intended, with the encrypted SSD?

  • Standard Network Tiers in a Distributed N-Tier System

    Distributed N-Tier client/server architecture allows segments of an application to be broken up and distributed across multiple locations on a network. Listed below are the standard tiers in a Distributed N-Tier system.
    End-User Client Tier: The end-user client is responsible for sending requests to web servers and other application servers and for translating the responses so that the end user can interpret the data effectively. The primary roles for this tier are to communicate with servers and translate server responses back for the end user to interpret. Business-specific functions: validate data; display data; send data to web server.
    Web Server Tier: The web server tier processes new requests for information coming in on the HTTP and HTTPS ports. It primarily handles the generation of user interfaces, and calls the application server when it needs to access data and business logic. Business-specific functions: send data to application server; format data for display; validate data.
    Application Server Tier: The application server stores and executes predefined business logic that is applied to various pieces of data as the business requires. The processed data is then returned to the web server. Additionally, this server directly calls the database to obtain and store any data used by the system. Business-specific functions: validate data; process data; send data to database server.
    Database Server Tier: The database server is responsible for storing and returning all data needed by the calling applications. The primary role of this server is storage; data is stored as needed and can be recalled at any later point in time. Business-specific functions: insert data; delete data; return data to application server.

  • Is realtime validation of username good or bad?

    - by iamserious
    I have a simple form for users to sign up to my site, with email, username, and password fields. We are now trying to implement AJAX validation so the user doesn't have to post the form to find out whether the username is already taken. I can do this either on the keyup event or on the blur event. My question is: which of these is really the best approach?
    Keyup: From the user's point of view, it is best if validation happens as they type (on keyup) - of course, I wait half a second to see if the user has stopped typing before firing off the request, and the user can make any adjustments immediately. But this means I send far more requests than if I validated the username on the blur event.
    Blur: The number of requests is much lower when validation is done on the blur event. But this means the user has to actually leave the textbox, look at the validation result, and if necessary go back to it to make changes, repeating the whole process until they get it right.
    I had a quick look at Google, Tumblr, and Twitter, and none of them actually does username validation on keyup events (heck, Tumblr waits for the form to be posted), but I could swear I have seen keyup validation in a lot of places too. So, coming back to the question: will keyup validation mean too many requests for the server - is it unnecessary overhead? Or is it worth taking these hits to give the user a better experience?
    ps: all my regex validations etc. are already done in JavaScript, and only when a username passes all the other criteria does the page send a request to the server to check if the username already exists. (The server is doing a select count(1) from user where username = '' - nothing substantial, but still enough to occupy some resources.)
    pps: I'm on the ASP.NET / MS SQL stack, if that matters.
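
    A minimal sketch of the debounced keyup approach (in TypeScript; the endpoint, element id and regex are invented stand-ins):

        const input = document.querySelector<HTMLInputElement>("#username")!;
        const localRules = /^[a-z0-9_]{3,20}$/i; // stand-in for the existing client-side checks
        let timer: number | undefined;

        input.addEventListener("keyup", () => {
          window.clearTimeout(timer);               // restart the half-second countdown
          timer = window.setTimeout(async () => {
            const name = input.value;
            if (!localRules.test(name)) return;     // fail fast, no server round trip
            const res = await fetch(
              `/api/username-available?name=${encodeURIComponent(name)}`);
            const { available } = await res.json();
            input.classList.toggle("taken", !available);
          }, 500);
        });

    With the debounce in place, the server sees at most one request per typing pause rather than one per keystroke, which narrows the load gap between the keyup and blur options considerably.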

  • How to prevent duplicate data access methods that retrieve similar data?

    - by Ronald Wildenberg
    In almost every project I work on with a team, the same problem seems to creep in. Someone writes UI code that needs data and writes a data access method:
        AssetDto GetAssetById(int assetId)
    A week later someone else is working on another part of the application, also needs an AssetDto, but now including 'approvers', and writes the following:
        AssetDto GetAssetWithApproversById(int assetId)
    A month later someone needs an asset, but now including the 'questions' (or the 'owners' or the 'running requests', etc.):
        AssetDto GetAssetWithQuestionsById(int assetId)
        AssetDto GetAssetWithOwnersById(int assetId)
        AssetDto GetAssetWithRunningRequestsById(int assetId)
    And it gets even worse when methods like GetAssetWithOwnerAndQuestionsById start to appear. You see the pattern that emerges: an object is attached to a large object graph and you need different parts of this graph in different locations. Of course, I'd like to prevent having a large number of methods that do almost the same thing. Is it simply a matter of team discipline, or is there some pattern I can use to prevent this? In some cases it might make sense to have separate methods, i.e. getting an asset with running requests may be expensive, so I do not want to include these all the time. How should I handle such cases?
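
    One option, sketched below in TypeScript with stub types, is to collapse the family into a single method that takes an explicit include list, so call sites still document what they load while expensive parts of the graph stay opt-in:

        interface AssetDto {
          id: number;
          approvers?: string[];
          questions?: string[];
          owners?: string[];
          runningRequests?: string[];
        }

        type AssetInclude = "approvers" | "questions" | "owners" | "runningRequests";

        interface AssetRepository {
          // One method replaces GetAssetById, GetAssetWithApproversById, ...
          getAssetById(assetId: number, include?: readonly AssetInclude[]): AssetDto;
        }

        // repo.getAssetById(42);                             // bare asset, stays cheap
        // repo.getAssetById(42, ["approvers", "questions"]); // graph parts on demand

    The compiler keeps the include list honest, and the expensive 'running requests' case is only paid for when a caller names it.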

  • The Future of the Database Begins Soon: Oracle Database In-Memory Launch on June 10, 2014

    - by user645740
    We are expecting a revolutionary addition in the history of the Oracle database. Larry Ellison first spoke publicly about Database In-Memory at OpenWorld. The launch webcast will be on June 10, 2014; you can register for it here:
    June 10: Oracle CEO Larry Ellison Live on the Future of Database Performance
    http://www.oracle.com/us/dm/sev100306382-ww-ww-lw-wi1-ev-2202435.html
    10:00 a.m. – 11:30 a.m. PT (19:00–20:30 CET).
    Oracle Database In-Memory runs lightning-fast queries in real time: it can speed up queries by orders of magnitude and makes transactions faster too, all without any changes to the applications! Oracle Database In-Memory: Powering the Real-Time Enterprise. Watch the launch event yourself!

  • How to Deal with an out of touch "Project manager"

    - by Joe
    This "manager" is 70+ yrs old and a math genius. We were tasked with creating a web application. He loves SQL and stored procedures. He first created this in MS access. For the web app I had to take his DB migrate to SQL server. His first thought was to have a master stored procedure with a WAITFOR Handling requests from users. I eventually talked him out of that and use asp.net mvc. Then eventually use the asp.net membership. Now the web app is a mostly handles requests from the pages that is passed to stored procedures. It is all stored procedure driven. The business logic as well. Now we are having an one open DB connection per user logged in plus 1. I use linq to sql to check 2 tables and return the values thats it period. So 25 users is a load. He complains why my code is bad cause his test driver stored procedure simulates over 100 users with no issue. What are the best arguments for not having the business logic not all in stored procedures?? How should I deal with this?? I am giving an abbreviated story of course. He is a genius part owner of the company all the other owners trust him because he is a genius. and quoting -"He gets things done. old school".

  • Need to Determine the Engine Status?

    - by user702295
    If you need to establish the status of the engine, begin with this SQL:
        select status, engine, engine_version, fore_column_name
        from dm.forecast_history
    The status of an engine run is stored in the FORECAST_HISTORY table, in the STATUS field. That table also includes the FORE_COLUMN_NAME field, which holds the name of the column in SALES_DATA in which the relevant forecast is stored. Here are the possible statuses:
        -1, -2 : The engine failed in the initialization phase, i.e. before the engine manager created the engines.
        0 : The engine stopped in the optimization phase, i.e. after the engines were created.
        1 : The engine finished the run successfully.
        2 : A forecast was never calculated for the relevant column mentioned in FORE_COLUMN_NAME.
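
    A hypothetical convenience query, grounded only in the status codes listed above, that decodes them into readable labels:

        select fore_column_name,
               engine,
               engine_version,
               case status
                 when -1 then 'Failed during initialization'
                 when -2 then 'Failed during initialization'
                 when  0 then 'Stopped in the optimization phase'
                 when  1 then 'Finished successfully'
                 when  2 then 'Forecast never calculated for this column'
                 else 'Unknown status'
               end as status_label
        from dm.forecast_history;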

  • Will the new Twitter API 1.1 allow hashtag/tweet/trend queries without any authentication, i.e. for a client that does not use a user's account at all?

    - by P5music
    I see that, even when not logged in to Twitter with an account, if I google hashtags or Twitter accounts, Twitter shows them. I think it should also be possible to get those tweets programmatically, but I do not know that for sure, so I'm asking for confirmation here, especially regarding the future under the new Twitter API restrictions. I mean: will it be possible to get tweets from hashtags or accounts without logging in to a user account (and so without wanting to access the user's settings, subscriptions, etc., because I do not need them), and thus without having to respect any token limit? I found these API 1.1 FAQs; do I have to be concerned?
        Will an application have to request user authorization just to make public API calls? When API v1.1 is released, user authorization (and access tokens) are required for all API 1.1 requests. In the weeks following release, some methods will require only application-based authentication for certain "userless" contexts.
        Will the Search API require authentication? The Search API is now part of the official REST API in version 1.1. In addition to serving results in a format consistent with other Tweet resources, usage will also require authentication.
