Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.

Page 386 of 976

  • Move SQL Server transaction log to another disk

    - by Jim Lahman
    When restoring a database backup, by default, SQL Server places the database files in the master database file directory. In this example, that location is L:\MSSQL10.CHTL\MSSQL\DATA, as shown by the output of sp_helpfile. Hence, the restored files for the database CHTL_L2_DB are in the same directory.

    Per SQL Server best practices, the log file should be on its own disk drive so that the database and log file can operate in a sequential manner and perform optimally. The steps to move the log file are as follows:

    1. Record the location of the database files and the transaction log files
    2. Note the future destination of the transaction log file
    3. Get exclusive access to the database
    4. Detach from the database
    5. Move the log file to the new location
    6. Attach to the database
    7. Verify the new location of the transaction log

    Record the location of the database files
    To view the current location of the database files, use the system stored procedure sp_helpfile:

        use chtl_l2_db
        go

        sp_helpfile
        go

    Note the future destination of the transaction log file
    The transaction log file will be moved to K:\MSSQLLog.

    Get exclusive access to the database
    To get exclusive access to the database, alter the database access to single_user. If users are still connected to the database, remove them by using the with rollback immediate option. Note: if you had a query pane connected to the database when it was placed into single_user mode, you will be presented with a reconnection dialog box.

        alter database chtl_l2_db
        set single_user with rollback immediate
        go

    Detach from the database
    Now detach from the database so that we can use Windows Explorer to move the transaction log file:

        use master
        go

        sp_detach_db 'chtl_l2_db'
        go

    Attach to the database
    After moving the transaction log file, re-attach to the database:

        use master
        go

        sp_attach_db 'chtl_l2_db',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB.MDF',
        'K:\MSSQLLog\CHTL_L2_DB_4.LDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_1.NDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_2.NDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_3.NDF'
        GO

    Verify the new location of the transaction log
    Run sp_helpfile again against the re-attached database to confirm that the log file now resides in K:\MSSQLLog.

    Read the article

  • Ubuntu 11.10 not starting in Graphical mode?

    - by iammilind
    I am in deep trouble. I am using Ubuntu 11.10 in dual-boot mode with XP. Originally my touchpad was not working (sometimes). To fix that, I installed something, and after a reboot my Ubuntu is not booting up! I have several software packages installed over the past several weeks, so a reinstall of the OS is my last resort. Can someone help me get it booting in graphical mode again?

    I had followed the procedure mentioned in this thread as well. With that link, I am somehow able to restart in text mode, but no luck after that: I am not able to get back to graphical mode. The important lines of lspci output are the following:

        00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 0c)
        00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (primary) (rev 0c)
        00:02.0 Display controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (primary) (rev 0c)

    I am attaching a few snapshots for more details on the hardware.

    Read the article

  • How to sell logistical procedures that require less time to perform but more finesse?

    - by foampile
    I am working with a group where part of the responsibilities is managing a certain set of configuration files which, of course, have the same skeleton/structure across different environments but different values (server, user, this setting, that setting, etc.). A pretty classic scenario...

    The problem is that everyone just goes and modifies the final, environment-specific files, and basically repeats the work for every environment. Personally, I am offended to have to perform repeatable, mundane tasks in this day and age when we have technologies to automate it all. So I devised a very simple procedure of abstracting the files into templates, stubbing the env-specific values with parameters, and then wrote a simple Perl script that, given a template and an environment matrix with env-specific values for each param, produces the final file. This is nothing special, cutting-edge or revolutionary -- I am pretty sure that 20 years ago efficient shops did their CM like that.

    However, that requires that changes are made at the template level and then distributed across the different environments using the script, not by making changes in the final environment-specific files. This is where I am encountering resentment, as they feel "comfortable" doing it their old, manual, repeated-labor way. Personally, I don't have a problem with them working hard rather than smart, but the problem is that when I have to build on top of someone else's changes, I have to merge their changes into my template from a specific file, which takes time and is grueling.

    So my question is how to go about selling my method, which makes things so much faster, in an environment that is resistant to change and where most things have to be done at the level of the least competent team member?
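
    For illustration, here is a minimal Python sketch of the template-plus-environment-matrix idea described above (the original script was Perl). The parameter names, environments, and file layout are assumptions made up for the example, not the poster's actual setup:

        # Minimal sketch of the "template + environment matrix" approach.
        # All names below (parameters, environments, output files) are illustrative assumptions.
        from string import Template

        # One template shared by all environments; ${...} placeholders mark env-specific values.
        TEMPLATE = Template(
            "server=${server}\n"
            "user=${user}\n"
            "timeout=${timeout}\n"
        )

        # The "environment matrix": one row of values per environment.
        ENVIRONMENTS = {
            "dev":  {"server": "dev-db01",  "user": "app_dev",  "timeout": "30"},
            "test": {"server": "test-db01", "user": "app_test", "timeout": "30"},
            "prod": {"server": "prod-db01", "user": "app_prod", "timeout": "5"},
        }

        def render(env_name):
            """Produce the final, environment-specific file content from the template."""
            return TEMPLATE.substitute(ENVIRONMENTS[env_name])

        if __name__ == "__main__":
            for env in ENVIRONMENTS:
                with open("app.%s.conf" % env, "w") as fh:
                    fh.write(render(env))

    Changes are then made only to TEMPLATE or to the matrix, and every environment-specific file is regenerated rather than edited by hand.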

    Read the article

  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    the term "client" used here is not referring to client's browser, but client server Before cache workflow 1. client make a HTTP request --> 2. server process --> 3. store parsed results into memcache for next use (cache indefinitely) --> 4. return results to client --> 5. client get the result, store into client's local memcache with TTL After cache workflow 1. another client make a HTTP request --> 2. memcache found return memcache results to client --> 3. client get the result, store into client's local memcache with TTL TTL = time to live Is possible for me to know when the data was updated, and to expire relevant memcache(s) accordingly. However, the pitfalls on client site cache TTL Any data update before the TTL is not pick-up by client memcache. In reverse manner, where there is no update, client memcache still expire after the TTL First request (or concurrent requests) after cache TTL will get throttle as it need to repeat the "Before cache workflow" In the event where client require several HTTP requests on a single web page, it could be very bad in performance. Ideal solution should be client to cache indefinitely until further notice. Here are the three proposals about futher notice Proposal 1 : Make use on HTTP header (current implementation) 1. client sent HTTP request last modified time header 2. server check if last data modified time=last cache time return status 304 3. client based on header to decide further processing GOOD? ---- - save some parsing for client - lesser data transfer BAD? ---- - fire a HTTP request is still slow - server end still need to process lots of requests Proposal 2 : Consistently issue a HTTP request to check all data group last modified time 1. client fire a HTTP request 2. server to return last modified time for all data group 3. client compare local last cache time with the result 4. if data group last cache time < server last modified time then request again for that data group only GOOD? ---- - only fetch what is no up-to-date - less requests for server BAD? ---- - every web page require a HTTP request Proposal 3 : Tell client when new data is available (Push) 1. when server end notice there is a change on a data group 2. notify clients on the changes 3. help clients to fetch again data 4. then reset client local memcache after data is parsed GOOD? ---- - let the cache act/behave like a true cache BAD? ---- - encourage race condition My preference is on proposal 3, and something like Gearman could be ideal Where there is a change, Gearman server to sent the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy)

    Read the article

  • Help with dual booting Windows 8.1 Professional and Ubuntu 13.10

    - by user1292548
    I recently installed a clean version of Windows 8.1 Professional on my Lenovo Y500 (with a Samsung 256GB 840 Pro SSD). I have Windows all set up and running normally. I am trying to dual-boot Windows 8.1 and Ubuntu 13.10, but the installation procedure doesn't allow me to "Install alongside...", nor does it show my SSD partitions correctly when I choose the "Something Else" option. I have created a 25GB partition of free space in the Windows disk manager, but on the Ubuntu installation screen it shows the whole drive as free space. I have tried installing from a burned .ISO disk and from a bootable USB; the results are the same for both.

    Windows Disk Management screen: http://imageshack.us/a/img855/9504/59zu.jpg
    The Ubuntu installation screen: http://imageshack.us/a/img62/2712/9g6i.jpg

    I ran into this problem before when trying to dual-boot Ubuntu and Windows 7 Professional a month ago, but I gave up and never resolved the issue.

    --EDIT-- I tried what Eero Aaltonen suggested, and this is my result:

        ubuntu@ubuntu:~$ sudo parted /dev/sda print
        Warning: /dev/sda contains GPT signatures, indicating that it has a GPT table.
        However, it does not have a valid fake msdos partition table, as it should.
        Perhaps it was corrupted -- possibly by a program that doesn't understand GPT
        partition tables. Or perhaps you deleted the GPT table, and are now using an
        msdos partition table. Is this a GPT partition table?
        Yes/No? yes
        Model: ATA Samsung SSD 840 (scsi)
        Disk /dev/sda: 256GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt

        Number  Start  End  Size  File system  Name  Flags

        ubuntu@ubuntu:~$

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on the asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. So, let's continue with TPL Dataflow:

    - Asynchronous functions
    - TPL Dataflow
    - Task-based Asynchronous Pattern

    Part II: TPL Dataflow

    Definition (quoted from the Async CTP documentation): "TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#."

    This means: data manipulation processed asynchronously. "TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications." Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? How do you avoid concurrency issues? How do you notify when data is available? How do you control how much data is waiting to be consumed? Etc.

    Dataflow Blocks

    TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let's review the data processing blocks available. Blocks are categorized into three groups: Buffering Blocks, Executor Blocks, and Joining Blocks. Think of them as electronic circuitry components :)

    1. BufferBlock<T>: a FIFO (First In, First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers but only one will actually process each item).
    2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>), but it links the receiving event to all consumers (it makes the data available for consumption to N consumers). The developer can provide a function to make a copy of the data if necessary.
    3. WriteOnceBlock<T>: it stores only one value, and once it's been set, it can never be replaced or overwritten (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value.
    4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each item in the queue. You can also specify how many parallel executions to allow (degree of parallelism).
    5. TransformBlock<TInput, TOutput>: this is an executor block designed to transform each input; that is why it defines an output parameter. It ensures messages are processed and delivered in order.
    6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock but produces one or more outputs from each input.
    7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs).
    8. JoinBlock<T1, T2, ...>: it generates tuples from all inputs (it aggregates inputs). Inputs could be of any type you want (T1, T2, etc.).
    9. BatchJoinBlock<T1, T2, ...>: aggregates tuples of collections. It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>).

    Next time I will show some examples of usage for each TDF block.

    * Images taken from Microsoft's Async CTP documentation.

    Read the article

  • Dreaded SQLs

    - by lavanyadeepak
    We used to think that the only dangerous SQL statement is one without a WHERE clause, since running that against a server is going to impact the entire table, like waving a magic wand. For that reason we should cultivate the habit of first writing the statement as a SELECT and only then modifying it into an UPDATE or DELETE. Within the T-SQL window, I would normally prefer the following first:

        select * from employee where empid in (4,5)

    and then, once I am satisfied with the results, I would go ahead with the following change:

        --select *
        delete from employee where empid in (4,5)

    Today I discovered another coding horror. This one typically applies to stored procedures and variable nomenclature. It is always desirable to have a naming convention for parameters that is distinct from the column names and internal variables. This helps quicker debugging of stored procedures, besides enhancing readability. Otherwise, in a quick bout of enthusiasm, a statement like

        if (@CustomerID = @CustomerID)

    [when the latter is intended to denote the column name and there is a superfluous @ prepended] makes zeroing in on the problem a little tricky. Had there been stronger nomenclature rules, debugging would have been more straightforward and simpler, right?

    Read the article

  • Why would a Java app make an RPC call to itself?

    - by amphibient
    I am working with a multithreaded, homegrown, multi-module app in my new job. We use the Thrift protocol to communicate RPC calls between different stand-alone applications in a distributed system. One of them listens on multiple ports, and I just noticed that it actually makes an RPC call to itself: from one thread invoked from one socket it listens to (a web service call) to another port within the same app. I verified that it could accomplish the same thing if it just directly called the method that the remote procedure ultimately invokes, as it is all within the same application, same JVM. To make it even more mysterious, the call is completely synchronous, i.e. no callbacks involved: the first thread just sits and waits until the call across the wire to itself comes back.

    Now, I am perplexed why anybody would do it this way. It seems like phoning somebody who sits in the same room as you. Can anybody provide an explanation why the developer before me would come up with the above-mentioned model? Maybe there is a reason and I am missing something.

    Read the article

  • Web Crawler for Learning Topics on Wikipedia

    - by Chris Okyen
    When I want to learn a vast topic on Wikipedia, I don't know where to start. For instance, say I want to learn about binary stars; I then have to know other things linked on that page, and the things linked on all the linked pages, and so on for a specified number of levels. I want to write a web crawler like HTTrack or something similar that will display a hierarchy of the links on a certain page and the links on those linked pages. I wish to use as much prewritten code as possible.

    Here is an example, pretending we are bending the rules by grabbing links from only the first sentence of each page, with the crawler archiving and "processing" two levels deep. The page is Ternary operation.

    The first level: "In mathematics a ternary operation is an N-ary operation."

    The second level:
    - Under Mathematics: "Mathematics (from Greek μάθημα máthema, 'knowledge, study, learning') is the abstract study of topics encompassing quantity, structure, space, change and others; it has no generally accepted definition."
    - Under N-ary: "In logic, mathematics, and computer science, the arity of a function or operation is the number of arguments or operands that the function takes."
    - Under Operation: "In its simplest meaning in mathematics and logic, an operation is an action or procedure which produces a new value from one or more input values."

    I need some way to determine in what order to approach all these wiki pages to learn the concept (in this case ternary operations). Following along with this example, one way to show the reading path would be a printout like so: the first sentence of the Mathematics page doesn't link to the first sentence of any of the pages linked from the ternary page two levels deep. (Please tell me how I should explain this.) In other words, one child node of the top page's (Ternary_operation's) first sentence, Mathematics, does not have any child nodes that reference the children of the top page's other child nodes, N-ary and Operation. Thus it is safe to read it first. Since N-ary has a link to Operation, we should read the Operation page second and finally read the N-ary page last.

    Again, I wish to use as much prewritten code as possible, and was wondering what language to use and what would be the simplest way to go about doing this, if there isn't already something out there. Thank you!
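
    As a starting point, here is a rough Python sketch of the two-level crawl described above, using the requests and BeautifulSoup libraries. It collects /wiki/ links from a start page and then from each linked page, and prints a crude reading order (pages with fewer collected links first). Restricting the crawl to the first sentence, politeness delays, and a real dependency-based ordering are left out, and the limits and heuristic are assumptions, not a finished design:

        # Two-level Wikipedia link crawl; ordering heuristic is deliberately naive.
        import requests
        from bs4 import BeautifulSoup

        BASE = "https://en.wikipedia.org"

        def wiki_links(path, limit=10):
            """Return up to `limit` /wiki/ article links found on the given page."""
            html = requests.get(BASE + path).text
            soup = BeautifulSoup(html, "html.parser")
            links = []
            for a in soup.find_all("a", href=True):
                href = a["href"]
                if href.startswith("/wiki/") and ":" not in href and href not in links:
                    links.append(href)
                if len(links) >= limit:
                    break
            return links

        def crawl(start, depth=2):
            """Build a {page: [linked pages]} map down to `depth` levels."""
            graph, frontier = {}, [start]
            for _ in range(depth):
                next_frontier = []
                for page in frontier:
                    if page not in graph:
                        graph[page] = wiki_links(page)
                        next_frontier.extend(graph[page])
                frontier = next_frontier
            return graph

        if __name__ == "__main__":
            graph = crawl("/wiki/Ternary_operation", depth=2)
            for page, links in sorted(graph.items(), key=lambda kv: len(kv[1])):
                print(page, "->", ", ".join(links[:5]))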

    Read the article

  • Class design issue

    - by user2865206
    I'm new to OOP, and a lot of times I become stumped in situations similar to this example. Task: generate an XML document that contains information about a person. Assume the information is readily available in a database. Here is an example of the structure:

        <Person>
          <Name>John Doe</Name>
          <Age>21</Age>
          <Address>
            <Street>100 Main St.</Street>
            <City>Sylvania</City>
            <State>OH</State>
          </Address>
          <Relatives>
            <Parents>
              <Mother>
                <Name>Jane Doe</Name>
              </Mother>
              <Father>
                <Name>John Doe Sr.</Name>
              </Father>
            </Parents>
            <Siblings>
              <Brother>
                <Name>Jeff Doe</Name>
              </Brother>
              <Brother>
                <Name>Steven Doe</Name>
              </Brother>
            </Siblings>
          </Relatives>
        </Person>

    OK, let's create a class for each tag (i.e. Person, Name, Age, Address):
    - Assume each class is only responsible for itself and the elements directly contained.
    - Each class will know (have defined by default) the classes that are directly contained within it.
    - Each class will have a process() function that adds itself and its children to the XML document we are creating.
    - When a child is drawn, as in the previous line, we will have it call process() as well.
    - Now we are in a recursive loop where each object draws its children until all are drawn.

    But what if only some of the tags need to be drawn, and the rest are optional? Some are optional based on whether the data exists (if we have it, we must draw it), and some are optional based on the preferences of the user generating the document. How do we make sure each object has the data it needs to draw itself and its children? We can pass down a massive array through every object, but that seems shitty, doesn't it? We could have each object query the database for it, but that's a lot of queries, and how does it know what its query is? What if we want to get rid of a tag later? There is no way to reference them.

    I've been thinking about this for 20 hours now. I feel like I am misunderstanding a design principle or am just approaching this all wrong. How would you go about programming something like this? I suppose this problem could apply to any scenario where there are classes that create other classes, but the classes created need information to run. How do I get the information to them in a way that doesn't seem fucky? Thanks for all of your time, this has been kicking my ass.
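
    To make the trade-off concrete, here is a minimal Python sketch (using the standard xml.etree.ElementTree module) of one common answer: build the tree from a single data structure fetched up front, and simply skip optional elements whose data is missing or that the user switched off. The field names mirror the example document; the rendering rules and flags are assumptions for illustration:

        # Build the <Person> document from a single dict; optional parts are skipped
        # when the data is absent or the caller opts out.
        import xml.etree.ElementTree as ET

        def build_person(data, include_relatives=True):
            person = ET.Element("Person")
            ET.SubElement(person, "Name").text = data["name"]

            if "age" in data:                               # optional: only drawn if present
                ET.SubElement(person, "Age").text = str(data["age"])

            if "address" in data:
                addr = ET.SubElement(person, "Address")
                for tag, key in (("Street", "street"), ("City", "city"), ("State", "state")):
                    if key in data["address"]:
                        ET.SubElement(addr, tag).text = data["address"][key]

            if include_relatives and data.get("siblings"):  # optional by user preference
                rel = ET.SubElement(person, "Relatives")
                sib = ET.SubElement(rel, "Siblings")
                for name in data["siblings"]:
                    ET.SubElement(ET.SubElement(sib, "Brother"), "Name").text = name

            return person

        if __name__ == "__main__":
            data = {"name": "John Doe", "age": 21,
                    "address": {"street": "100 Main St.", "city": "Sylvania", "state": "OH"},
                    "siblings": ["Jeff Doe", "Steven Doe"]}
            print(ET.tostring(build_person(data), encoding="unicode"))

    The point is that the "massive array" becomes a single, well-defined data object assembled by one query layer, and each builder only reads the slice it needs.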

    Read the article

  • When things go awry

    - by Phil Factor
    The moment the Entrepreneur opened his mouth on prime-time national TV, spelled out the URL and waxed big on how exciting 'his' new website was, I knew I was in for a busy night. I'd designed and built it. All at once, half a million people tried to log into the website. Although all my stress-testing paid off, I have to admit that the network locked up tight long before there was any danger of a database or website problem.

    Soon afterwards, the Entrepreneur and the Big Boss were there in the autopsy meeting. We picked through all our systems in detail to see how they'd borne the unexpected strain. Mercifully, in view of the sour mood of the Big Boss, it turned out that the only thing we could have done better was buy a bigger pipe to and from the internet. We'd specified that 'big pipe' when designing the system. The Big Boss had then railed at the cost and so we'd subsequently compromised. I felt that my design decisions were vindicated.

    The Big Boss brooded for a while. Then he made the significant comment: "What really ****** me off is the fact that, for ten minutes, we couldn't take people's money." At that point I stopped feeling smug. Had the internet connection been better, the system would have reached its limit and failed rather precipitously, and that wasn't what he wanted. Then it occurred to me that what had gummed up the connection was all those images on the site, that had made it so impressive for the visitors. If there had been a way to automatically pare down the site to the bare essentials under stress... Hmm.

    I began to consider disaster-recovery in the broadest sense -- maintaining a service in spite of unusual or unexpected events. What he said makes a lot of sense: sacrifice whatever isn't essential to keep the core service running when we approach the capacity limits. Maybe in IT we should borrow (or revive) the business concept of the 'Skeleton service', maintaining only the priority parts under stress, using a process that is well-prepared and carefully rehearsed.

    How might this work? Whatever the event we have to prepare for, it is all about understanding the priorities; knowing what one can dispense with when the going gets tough. In the event of database disaster, it's much faster to deploy a skeletal system with only the essential data than to restore the entire system, though there would have to be a reconciliation process to update the revived database retrospectively, once the emergency was over. It isn't just the database that could be designed for resilience. One could prepare for unusually high traffic in a website by designing a system that degraded gradually to a 'skeletal' site, one that maintained the commercial essentials without fat images, JavaScript libraries and razzmatazz. This is all what the Big Boss scathingly called 'a mere technicality'. It seems to me that what is needed first is a culture of application and database design which acknowledges that we live in a very imperfect world, and reacts accordingly when things go awry.

    Read the article

  • Is Wordpress more appropriate than Magento/Opencart for a site like this?

    - by Alex
    The premise of the site is that a user pays a small fee to advertise an item that they want to sell. The user is therefore responsible for adding the "products", not the administrator. The product upload will create a product page for that item. This is a rather common framework that I'm sure you're familiar with. My initial thought was that it would be best suited to Magento -- mainly because it needs to accept payments -- and the products will grow to form a catalog of categorized products.

    However, there is no concept of a shopping cart. A buyer does not buy the item online or go to a checkout; they simply look at the product and contact the seller if they like it. The buyer and seller then take it from there. For this reason, I then began to suspect that Magento is perhaps overkill, or simply not the right CMS when there is no checkout procedure (other than the uploader making a payment). So then I begin to think Wordpress... Hmm.

    Feature requirements:
    - Users can add content via a form process
    - Users can be directed to a payment gateway
    - For each product listing, a series of photographs shall be displayed, in thumbnail form
    - Zoom/rotate capabilities on the images would be a welcome feature

    In short: e-commerce CMS, or something more simple?

    Read the article

  • Win7 and Ubuntu lost after installing Ubuntu 12.04 and Win7 as a dual-boot system; I have no OS on my laptop now

    - by abos
    Here is the procedure. In the morning I installed Ubuntu from a USB stick, directly, without changing anything in my Win7 system. After the install completed, the Ubuntu installer told me to reboot, and everything seemed fine. However:

    1. On reboot, there was NO Ubuntu entry for me to select, and my laptop went straight to logging in to Win7.
    2. No Ubuntu entry shows up in Win7's boot configuration (default system).
    3. Logging into Ubuntu from the USB ("Try Ubuntu without installing"), I can see that Ubuntu's filesystem is already there.
    4. I formatted the disk in Win7's Disk Management and rearranged the partitions; Win7 still had no trouble.
    5. In the afternoon I tried a few more installations and uninstallations of Ubuntu; there is still no way to select the Ubuntu system at boot.
    6. In the evening I made another attempt, installing Ubuntu with the third option offered by the installer (the choices being: install Ubuntu alongside Win7, erase Win7 and install Ubuntu, or "something else"). My check of the configuration that comes up with the "something else" option failed, and I rebooted.

    Now I have no OS at all, with a message saying: "Reboot and Select proper Boot Device or Insert Boot Media in selected Boot device and press a key." Files that were on Win7's original filesystem and on the Ubuntu filesystem can still be found when I "Try Ubuntu without installing", but I simply have no OS when I reboot my laptop normally.

    Read the article

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn't take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and oftentimes it is really important that a DBA understands not just the basics of how to perform a task, but why we do a task, and how that task works.

    As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if light fails to shine forth? Most would immediately tell you that you need to consider changing the light bulb. So you hop in the car and take them to the local home store and instruct them to buy a replacement. Confronted with a 40-foot display of light bulbs, how will they decide which of the hundreds of types of bulbs, of different types, fittings, shapes, colors, power and efficiency ratings, is the right choice? Obviously the main lesson the child is going to learn this day is how to use their cell phone as a flashlight so they don't have to ask for help the next time.

    Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own "40-foot display of light bulbs", in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA to help us find issues. Unfortunately, many resort to guesswork, trying different "bulbs" over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way.

    If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that "works", but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the "key lookup" operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of the SQL language and the intricacies of every step the SQL Server engine takes to process our query.

    My personal theory is that, as a professional, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or obsessive, or both. As an example: tasked with storing an integer value between 0 and 99999999, it's essential that I know that choosing an Integer over Decimal(8,0) will likely offer performance benefits. It is then useful that I also understand the value of adding a CHECK constraint, to make sure the values are valid for the desired range; and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!

    Read the article

  • When will EBS 12.2 be released?

    - by Steven Chan (Oracle Development)
    The most frequently asked question at OpenWorld this year was, "When will EBS 12.2 be released?" Sadly, Oracle's communication policies prohibit us from speculating about release dates for unreleased software. We are not permitted to give estimates, rough timelines, guesses, or anything else that remotely resembles specific guidance on release dates. You can monitor My Oracle Support and this blog for updates on EBS 12.2. I'll post them here as soon as they're available. I'm embedding an old favourite from 2007 in its entirety here, since it applies equally to new releases as well as certifications.

    "Loose Lips Sink Ships" (March 20, 2007)

    If I were to sort emails in my inbox into groups, the biggest -- by far -- would be the one for emails that start with, "When will _____ be certified with the E-Business Suite?" I answer these dutifully but know that my replies can sometimes be maddening, for two reasons: technical uncertainty, and Oracle's rules for such communications.

    On the Spiral Model of Certifications

    Technology stack certifications tend to be highly iterative in nature. As a result, statements about certification dates tend to be accurate only when made in hindsight. Laypeople are horrified to hear this, but it's the ugly truth. Uncertainty is simply inherent to the process. I've become inured to it over the years, but it might come as a surprise to you that it can take many cycles to get fully-released software to work together. Take this scenario:
    1. We test a particular combination of Component A and B.
    2. If we encounter a problem, say, with Component A, we log a bug.
    3. We receive a new version of Component A.
    4. The process iterates again.
    The reality is this: until a certification is completed and released, there's no accurate way of telling how many iterations are yet to come. This is true regardless of the number of iterations that have already been completed.

    Our Lips Are Sealed

    Generally, people understand that things are subject to change, so the second reason I can't say anything specific is actually much more important than the first. "Loose lips might sink ships" was coined in World War II in an effort to remind people that careless talk can have serious consequences. Curiously, this applies to Oracle's communications about upcoming features, configurations, and releases, too. As a publicly traded company, we have very strict policies that prohibit us from linking specific releases to specific dates. If you've ever listened to an earnings call with analysts, you'll often hear them asking, "Can you add a little more color to that statement?" For certifications, color is usually the only thing that I have. Sometimes I can provide a bit more information about the technical nature of the certification in question, such as expected footprints or version levels. I can occasionally share technical issues that we've found, too, to convey the degree of risk or complexity involved in the certification. Aside from that, there's little additional information about specific dates, date ranges, or even speculation about dates that I can provide... that is, without having one of those uncomfortable conversations with Oracle Legal.

    So, as much as it pains me to do so, when it comes to dates, I'm always forced to conclude with a generic reply that blandly states one of the following:
    - We're working on that certification right now
    - That certification is in the pipeline but hasn't been started yet
    - We don't have plans for that certification

    Don't Shoot the Messenger

    Thankfully, I've developed a thick skin over the years -- which is a good thing, considering the colorful and energetic responses I've received over the years after answering these questions. However, on behalf of my Oracle colleagues who are faced with these questions every day in the field, I urge you to remember that they're required to follow these same corporate rules about date disclosures. It never hurts to ask, but don't be too disappointed if we can't provide you with a detailed answer. The Go-Go's had it right, after all.

    Related Articles: Webcast Replay Available: Technical Preview of EBS 12.2 Online Patching

    Read the article

  • Get More Value From Your Oracle Premier Support Investment

    - by Get Proactive Customer Adoption Team
    The Return on Investment in Support Training

    I'm a typical software user. I've been using spreadsheets almost daily for the past 10 years or so. I know how to enter simple formulas, format cells, import files, and I can sort and filter. Sometimes I even use a pivot table. I never attended training; I learnt everything I know on the fly. Sometimes it was intuitive and easy, other times I had to spend minutes and even hours searching for a solution. Yet when I see what some other people can do with their spreadsheets, I know I'm utilizing maybe 15% of the functionality. Pity; one day I really have to sign up for training. Why haven't I done it yet? Ah, you know, I'm a busy person, I have work to do. And if I need to use a feature that I am unfamiliar with, I'll spend time on it only when I really need it.

    Now wait. When I recall how much time I spent trying to figure out how things work compared to the time I spent doing productive work, I realize it was not insignificant. I'm unable to sum up all the time I spent 'learning' on the fly, but I'm sure it's been days or even weeks. And after all this time, I've mastered 15% of its features. If only I had attended training years ago. That investment would have paid back 10 times!

    Working with My Oracle Support is no different. Our customers typically use simple search, create service requests, and download patches. They think they know how to use My Oracle Support. And they're right. They know something, but often they're utilizing only a fragment of My Oracle Support's potential. For the investment that has been made, using only a small subset of the capabilities offered in My Oracle Support leaves value on the table. There is much more available in My Oracle Support. Dozens of diagnostic tools and proactive health checks will keep verifying your Oracle environments against best practices that Oracle gathers every day thanks to our comprehensive knowledge management process. Automated patch recommendations will help prevent known issues, and upgrade planning and more is included in My Oracle Support.

    Why are you not utilizing all of these best practices, capabilities and tools? Is it because you don't have time to invest 2-3 hours of your time to learn about the features? Simply because you think you can learn on the fly like I thought I could? Does learning on the fly how to properly use the Service Request escalation process when you already have a critical issue sound like a good idea?

    My advice is: invest your time now to learn how My Oracle Support can help you prevent issues on your systems. Learn how to find answers faster and resolve problems more efficiently. Understand how to properly complete a service request. Invest in Support training, offered at no additional cost to Oracle Premier Support customers. It will pay back quicker than you think. It will bring you more value than you think. Discover your advantage with Oracle Premier Support's Proactive Portfolio.

    Read the article

  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of fulltext. Our problem domain is pretty specific: users of the app generate posts, and as a general rule, posts that are more recent are considered to be of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about specific items or things) and we are using Python NLTK to do named-entity extraction to find interesting likely query terms. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team to draw out a local picture of sentiments like "soggy pizza." There's some machine learning in there too, to do entity resolution on terms like "soggy" to all manner of adjectives expressing nastiness.

    My problem is that I am at a loss for how to go about scoring these results. The text being searched is split up into a list of tokens, so my initial approach would be to normalize a float score between 0.0 and 1.0 generated from how far into the list the terms appear and how often they are repeated (a later mention of the term being worth less, an earlier one more, and greater frequency giving a greater score, etc.). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this.

    I am curious whether anyone has had to solve a similar problem of grading search relevance across appreciable metrics (frequency, term location/colocation, recency), and whether there are any guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
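
    For what it's worth, here is a hedged Python sketch of one way to combine the three metrics into a normalized 0.0-1.0 score: the earliest mention sets a position score, repeated mentions add diminishing value, and a recency factor decays with the age of the post. The weights and the 30-day half-life are arbitrary assumptions to be tuned, not a recommendation:

        # Naive relevance score combining position, frequency and recency.
        import math
        import time

        def score(tokens, query_term, post_timestamp,
                  w_position=0.5, w_frequency=0.3, w_recency=0.2, half_life_days=30.0):
            positions = [i for i, tok in enumerate(tokens) if tok == query_term]
            if not positions:
                return 0.0

            # Position: 1.0 for a term at the very start, approaching 0.0 near the end.
            position_score = 1.0 - (positions[0] / float(len(tokens)))

            # Frequency: diminishing returns for repeated mentions.
            frequency_score = 1.0 - 1.0 / (1.0 + math.log(1 + len(positions)))

            # Recency: exponential decay with a configurable half-life.
            age_days = max(0.0, (time.time() - post_timestamp) / 86400.0)
            recency_score = 0.5 ** (age_days / half_life_days)

            return (w_position * position_score +
                    w_frequency * frequency_score +
                    w_recency * recency_score)

    Because the weights sum to 1.0 and each component lies in [0, 1], the result stays in the 0.0-1.0 range, which keeps it comparable with (or at least mappable onto) whatever Sphinx returns for the fallback path.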

    Read the article

  • Korea's Anti Abortion / Pro Life Movement

    - by Randy Walker
    The South Korean government is in dire straits. The national birth rate continues to decline, and as the population grows older there aren't enough children being born to support the country long term. The social issues of the post-Korean War era are coming back to haunt the empowered nation. Torn apart by the Korean War (nicknamed the forgotten war in America) and facing starvation, South Korea allowed hundreds of thousands of its children to be adopted abroad. This has created a problem of epidemic proportions, essentially devaluing life in Korea and child rearing.

    In an effort to raise birth rates, the government encouraged its workers to go home early and procreate by turning off the lights in buildings. Something unknown to me was the criminalization of abortion except in special cases. According to this article, http://joongangdaily.joins.com, it's working. Abortions are down and women are being encouraged to give birth. However, the flip side is that illegal, risky abortions are on the rise, with potential back-alley abortions looming. But with a nation facing its potential implosion, it has to continue its efforts to encourage mothers to give birth.

    Many of the issues that America has faced are in stark contrast to South Korea. Abortion has been a generally accepted procedure for some time. If you'll recall, I mentioned South Korea devalued their children. But the nation's problems lie so much deeper. Being an Asian nation, saving "face" is an important aspect of life, and being an unwed mother disgraces the family. Living as a single mother in South Korea is a difficult life. Most married mothers stay at home to take care of the children. Being a shunned single mother that has a hard time getting a job (because you are a single mother) and affording child care isn't like life in America. If we in the States suddenly faced a birthrate crisis, what would the U.S. government do?

    Read the article

  • ubuntu 10.04: boot error for custom compiled kernel - gave up waiting for root device

    - by atharva
    Hi, I have installed Lucid on my Lenovo laptop (Y410 series, x86 platform) and it is working fine. Now I have compiled kernel 2.6.37 from the kernel tree sources. I followed the usual procedure for compiling a kernel (make menuconfig, make, make modules, etc.). Then I created the initrd image using mkinitramfs and updated my GRUB using the update-grub command. update-grub detects the initrd image of the compiled kernel. However, when I boot from this kernel it gives me the following error:

        Gave up waiting for root device. Common problems:
        - Boot args (cat /proc/cmdline)
          - Check rootdelay= (did the system wait long enough?)
          - Check root= (did the system wait for the right device?)
        - Missing modules (cat /proc/modules; ls /dev)
        ALERT! root=UUID=/... does not exist

    and then it drops to the initramfs prompt. I have tried the following solutions discussed in different Ubuntu forums:
    1. Disable UUID and point root=/dev/sda8 (sda8 is where my kernel images reside, both the default kernel and the compiled one) in /etc/default/grub
    2. Compile the kernel with CONFIG_DEVTMPFS=y, as suggested here

    Still I am unable to boot from the compiled kernel. Could someone please suggest a solution?

    Read the article

  • Wireless switch on Dell XT2 - strange behaviour of rfkill

    - by DyP
    I have a Dell Latitude XT2 with an Intel WLAN card (lspci lists it as "Intel Corporation Ultimate N WiFi Link 5300") running Lubuntu 12.04 with recent updates. The laptop has a hardware WLAN switch. I have problems activating the WLAN when booting with the hardware switch set to "off". The situation is a bit confusing, unfortunately: rfkill lists two WLAN devices (though lspci only shows the Intel one). This is the situation when booting with the hardware switch set to "off":

        0: dell-wifi: Wireless LAN
            Soft blocked: yes
            Hard blocked: yes
        1: dell-bluetooth: Bluetooth
            Soft blocked: yes
            Hard blocked: yes
        2: phy0: Wireless LAN
            Soft blocked: yes
            Hard blocked: yes

    From some tests, I conclude that WLAN is only activated when both dell-wifi and phy0 are unblocked in both software and hardware, but I can only unblock dell-wifi after the hardware switch is set to "on". The procedure right from boot, with the hardware switch set to "off", is:

    1. Soft-unblocking phy0 works as expected; this could be done by a start-up script.
    2. sudo rfkill unblock 0: nothing happens; the soft block on dell-wifi is not removed.
    3. Set the hardware switch to "on": phy0 gets its hard block removed. Still no WLAN.
    4. sudo rfkill unblock 0: both the soft and the hard block of dell-wifi are removed. WLAN is now active and works.
    5. sudo rfkill block 0: only adds the soft block, as expected. WLAN goes off again.

    So, in order to activate WLAN, I have to use the hardware switch and afterwards (manually) run a script -- that's a bit inconvenient. Does someone know a better solution? Maybe a daemon could help that listens to rfkill events and unblocks dell-wifi after I have set the hardware switch to "on"? (That sounds like another workaround.) When booting with the hardware switch set to "on", nothing is blocked, neither hard nor soft.
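
    In case it helps, here is a rough Python sketch of the workaround daemon idea mentioned above: it polls rfkill and, as soon as dell-wifi is no longer hard-blocked (i.e. the hardware switch was flipped to "on"), it removes the remaining soft block automatically. The device index (0), the polling interval, and the exact output format of rfkill are assumptions based on the listing above, and the script would need root privileges:

        # Poll rfkill and soft-unblock dell-wifi once its hard block disappears.
        import subprocess
        import time

        DEVICE_INDEX = "0"      # dell-wifi in the rfkill listing above
        POLL_SECONDS = 2

        def dell_wifi_hard_blocked():
            """Return True while the dell-wifi entry still reports 'Hard blocked: yes'."""
            output = subprocess.check_output(["rfkill", "list", DEVICE_INDEX]).decode()
            return "Hard blocked: yes" in output

        def main():
            while True:
                if not dell_wifi_hard_blocked():
                    # Hardware switch is on: remove the remaining soft block.
                    subprocess.call(["rfkill", "unblock", DEVICE_INDEX])
                time.sleep(POLL_SECONDS)

        if __name__ == "__main__":
            main()

    A udev rule or a listener on /dev/rfkill events would be cleaner than polling, but the polling version is the smallest thing that matches the behaviour described in the question.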

    Read the article

  • Single-developer GIT workflow (moving from straightforward FTP)

    - by melat0nin
    I'm trying to decide whether moving to VCS is sensible for me. I am a single web developer in a small organisation (5 people). I'm thinking of VCS (Git) for these reasons: version control, offsite backup, and a centralised code repository (which I can access from home).

    At the moment I work on a live server generally. I FTP in, make my edits and save them, then reupload and refresh. The edits are usually to theme/plugin files for CMSes (e.g. concrete5 or Wordpress). This works well but provides no backup and no version control. I'm wondering how best to integrate VCS into this procedure.

    I would envisage setting up a Git server on the company's web server, but I'm not clear how to push changes out to client accounts (usually VPSes on the same server) - at the moment I simply log into SFTP with their details and make the changes directly. I'm also not sure what would sensibly represent a repository - would each client's website get its own one? Any insights or experience would be really helpful. I don't think I need the full power of Git by any means, but basic version control and de facto cloud access would be really useful.

    Read the article

  • Migrating VB6 to HTML5 is not a fiction - Customer success story

    - by Webgui
    All of you VB developers, past or present, would probably find it hard to believe that old VB code can be migrated and modernized into the latest .NET-based HTML5 without having to rewrite the application. But we have been working on such tools for the past couple of years and already have several real-world applications that were fully 'transposed' from VB6. The solution is called Instant CloudMove and its main tool is the TranspositionStudio.

    It is a unique solution that relies on the concept of transposition. Transposition comes from mathematics and music and refers to exchanging elements while everything else remains the same, or moving an element as-is from one environment to another. This means that we take the source code and put it in a modern technological environment with relatively few adjustments. The concept is based on a set of Mapping Expressions, which are links between an element in the source environment and one in the target environment that has the same functionality. About 95% of the code is usually mapped out of the box, and the rest is handled with easy-to-use mapping tools designed for Visual Studio developers, providing them with a familiar environment and concepts for completing the mapping and allowing them to extend and customize existing mapping expressions. The solution is also based on a circular workflow that enables developers to make any changes as required until the result is satisfying.

    As opposed to existing migration solutions that offer automation but are usually a "black box" to the user, the transposition concept enables full visibility, flexibility and control over the code and process at all times, allowing developers to also add or change functionality or upgrade the UI within the process and tools.

    This is exactly the case with our customer's aging VB6 PMS (Property Management System), which needed a technological update as well as a design refresh. The decision was to move the VB6 application, which had about 1 million lines of code, to the latest web technology. Since the application was initially written 13 years ago and has had many upgrades since, the code was bound to be very patchy and include unused sections. As a result, the company, Mihshuv Group, considered rewriting the entire application in Java since it already had the knowledge. A rewrite would allow starting with a clean slate and designing functionality, database architecture and UI without any constraints. On the other hand, a rewrite entails long and detailed specification work as well as thorough QA, and this translates into a long project with high risk and costs.

    So the company looked for a migration solution as an alternative; the research led to Gizmox, and after examining the technology it was decided to perform a hybrid project which would include an automatic transposition of the core of the VB6 application (200,000 lines of code), while the UI redesign, new functionality, deletion of unused code and the rewrite of about 140 reports with Crystal Reports would be done manually using Visual WebGui development tools.

    The migration part of the project was completed in 65 days by 3 developers from Mihshuv Group guided by Gizmox migration experts, while the rewrite and UI upgrade tasks took about the same. So in only a few months Mihshuv Group produced an up-to-date product, written in the latest web technology, with a modern, friendly UI and improved functionality.

    [Screenshots: guest selection screen of the original VB6 PMS; guest selection screen on the new web-based PMS]

    Compared to the initial plan to rewrite the entire application in Java, the hybrid migration/rewrite approach taken by Mihshuv Group using Gizmox technology proved a great decision. In terms of time and cost there were substantial savings; from a project that was priced at a year at least (without taking into account the huge risk and uncertainty), it became a project of only a few months. More about this and other customer stories can be found here.

    Read the article

  • alsa - sound issues on ubuntu 12.04

    - by tam_ubuuser
    I have a Sony E series laptop with an HDMI port. At this stage, I have tested the sound card that provides the audio output on the laptop itself, i.e. I can hear songs. My laptop has two sound cards: an AMD 5450 and an Intel HDA (alsamixer shows it as S/PDIF).

    I decided to connect the HDMI output to my new HD TV, but I get only video on the TV, no audio output (the HDMI cable works fine with Windows 7). I couldn't switch the output to the other sound card (I don't know how to do that), so I decided to update ALSA and ran the following commands in a terminal:

        sudo apt-add-repository ppa:ubuntu-audio-dev/alsa-daily
        sudo apt-get update
        sudo apt-get install alsa-hda-dkms

    Then, strangely, there was no login sound and no audio output on my laptop at all. So I started over, following step 1 of the sound troubleshooting procedure from the official Ubuntu site, and then the speaker icon disappeared from the taskbar. Obviously, aplay -l now reports "no soundcards detected". So I tried step 4 from that guide, which lists all the audio hardware devices in my laptop:

        *-multimedia UNCLAIMED
            description: Audio device
            product: Cedar HDMI Audio [Radeon HD 5400/6300 Series]
            vendor: Hynix Semiconductor (Hyundai Electronics)
            physical id: 0.1
            bus info: pci@0000:01:00.1
            version: 00
            width: 64 bits
            clock: 33MHz
            capabilities: pm pciexpress msi bus_master cap_list
            configuration: latency=0
            resources: memory:f0040000-f0043fff
        *-multimedia UNCLAIMED
            description: Audio device
            product: 5 Series/3400 Series Chipset High Definition Audio
            vendor: Intel Corporation
            physical id: 1b
            bus info: pci@0000:00:1b.0
            version: 05
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list
            configuration: latency=0
            resources: memory:f5e00000-f5e03fff

    That command displays the names of the two cards, but aplay -l still gives no positive output, so I think ALSA cannot detect my sound cards. Is there a solution to this problem? It would be better if ALSA could channel output from multiple sound cards. How should I install and configure ALSA so that it detects the HDMI cable as soon as I connect it to my HD TV? And is it possible for ALSA and PulseAudio 2.0 to co-exist, and if so, how?

    Read the article

  • Can you/should you develop components for ASP.NET MVC?

    - by Vilx-
    Following from the previous question I've started to wonder - is it possible to implement "Components" in ASP.NET MVC (latest version)? And should you?

    Let's clarify what I mean by a "component". By that I mean a "control" (aka "widget"), similar to those that ASP.NET WebForms is built upon. A GridView might be a good example. In WebForms I can place a datasource component on my form (one line of code), a GridView component (another line of code) and bind them together (specify an attribute on the GridView). In the code-behind file I fill the datasource with data (a few lines of DB-querying code), and I'm all set.

    At this point the GridView is a fully functional standalone component. I can open the form, and I'll see all the data. I can sort it by clicking on the column headers; it is split into several pages; I can drag the column headers around and rearrange columns; I can turn on "grouping" mode; etc. And I don't need to write another line of code for any of it. The GridView, as a component, already has all the code tucked away in its classes and assemblies. I just place it on the form, initialize it, and it Just Works. At some times (like sorting or navigation to a different page) it will also perform AJAX callbacks to the server, but those too will be handled internally, with my code having no knowledge at all about it. And then there are also events that I can attach to if I want to get notified when something happens.

    In MVC I cannot see a way of doing this cleanly. Sure, there are the partial views, but those only handle half of the problem - they render the initial HTML. Some more can be achieved with client-side Javascript (like column re-arranging), but when the grid needs to do an AJAX callback (say, to fetch the next page of data), my code will have to get involved and process that request. At best I guess I can provide some helper methods to process it, but I'll have to write the code that calls them, and also provide a controller method with a signature matching the arguments of that callback. I guess that I could make some hacks with global events or special routes or something, but that just seems... hackish. Unelegant. Perhaps this is not the MVC way?

    Although I've completed one project in it, I'm still far from being an MVC expert. But then what is? In the intranet application that we're building there are dozens upon dozens of such grids. Naturally I want them all to have a unified look & behavior, and I don't want to repeat the same code all over the place. So what's the "MVC" approach to this problem?

    Read the article

  • Design practice for securing data inside Azure SQL

    - by Sid
    Update: I'm looking for a specific design practice as we try to build our own database encryption. Azure SQL doesn't support many of the encryption features found in SQL Server (table and column encryption). We need to store some sensitive information that has to be encrypted, and we've rolled our own using AesCryptoServiceProvider to encrypt/decrypt data to/from the database. This solves the immediate issue (no cleartext in the db) but poses other problems, like:

    - Key rotation: we have to roll our own code for this, walking through the db converting old ciphertext into new ciphertext.
    - Metadata mapping of which tables and which columns are encrypted: this is simple when it's just a couple of columns (send an email to all devs / document it), but that quickly gets out of hand...

    So, what is the best practice for doing application-level encryption against a database that doesn't support encryption? In particular, what is a good design to solve the above two bullet points? If you have specific schema additions, I'd love the details ("have an NVARCHAR(max) column to store the cipher metadata as JSON", or a SQL script/commands). If someone would like to recommend a library, I'd be happy to stay away from "DIY" too. Before going too deep - I assume there isn't any way I can add encryption support to Azure by creating a stored procedure, right?
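
    To make the two bullet points concrete, here is a hedged sketch of one common pattern, written in Python with the cryptography package rather than the poster's C#/.NET stack: every ciphertext is stored with a key-version prefix, so the "metadata" travels with the value and key rotation only has to re-encrypt rows that still carry an old version. The column format, version scheme and in-memory key store are assumptions for illustration only:

        # Versioned application-level encryption: ciphertexts carry their key version.
        from cryptography.fernet import Fernet

        # Versioned key store; in practice these would come from a proper secret store.
        KEYS = {1: Fernet(Fernet.generate_key()),
                2: Fernet(Fernet.generate_key())}
        CURRENT_VERSION = 2

        def encrypt(plaintext):
            token = KEYS[CURRENT_VERSION].encrypt(plaintext.encode())
            return "v%d:%s" % (CURRENT_VERSION, token.decode())   # store version with value

        def decrypt(stored):
            version, token = stored.split(":", 1)
            return KEYS[int(version[1:])].decrypt(token.encode()).decode()

        def rotate(stored):
            """Re-encrypt a value under the current key if it was written with an older one."""
            version, _ = stored.split(":", 1)
            if int(version[1:]) == CURRENT_VERSION:
                return stored
            return encrypt(decrypt(stored))

        if __name__ == "__main__":
            cell = encrypt("sensitive value")
            assert decrypt(cell) == "sensitive value"
            cell = rotate(cell)   # would upgrade any row still written under key v1

    The same idea maps onto the .NET AesCryptoServiceProvider approach: keep a key-version column (or prefix) next to each encrypted NVARCHAR value, plus a small table listing which table/column pairs are encrypted, so the rotation job can walk exactly those columns.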

    Read the article
