Search Results

Search found 6436 results on 258 pages for 'el quick'.

  • CodeIgniter form validation: if it failed, send the user back

    - by Momen M El Zalabany
    I have this controller (visits.php) that uses 3 models to build a page, then passes 4 variables to a view. The view contains a form that submits to another page (add_user.php). My question is: how can I send the user back to visits.php when form_validation fails? Do I have to reload visits.php all over again (a waste of resources), or should I just send the user back in history with JS or a PHP header, for example? Can you please show me an example of how you best handle a false result from form_validation? Note that when I used header('Location: ' . $_SERVER['HTTP_REFERER']); I couldn't echo validation_errors(). Thanks for the help.
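    A common pattern (a minimal sketch below, assuming the session and URL helpers/libraries are loaded, a form field named 'name', and 'visits' as the route to the visits controller) is to stash the errors in flashdata and redirect, so the messages survive the trip back:

        // in the add_user controller - one way, not the only one
        public function add_user()
        {
            $this->load->library('form_validation');
            $this->form_validation->set_rules('name', 'Name', 'required');

            if ($this->form_validation->run() === FALSE) {
                // keep the errors across the redirect
                $this->session->set_flashdata('errors', validation_errors());
                redirect('visits');   // visits.php does run again, but only once
            }
            // ...save the user, then redirect to a success page...
        }

        // in the visits controller, hand the flashdata to the view:
        // $data['errors'] = $this->session->flashdata('errors');

    This does re-run visits.php, but only on the failure path; the alternative of echoing validation_errors() after header('Location: ...') can't work because the redirect starts a fresh request.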

    Read the article

  • Creating a simple jQuery tooltip?

    - by Mike El Panther
    Does anybody here have any experience creating simple tooltips using jQuery? I have created a table:

        <tr id="mon_Section">
            <td id="day_Title">Monday</td>
            <td id="mon_Row" onmousemove="calculate_Time(event)"></td>
        </tr>

    The function calculate_Time is called and gets the X position of the mouse cursor; from that value it uses if and else-if statements to calculate the values of the other variables I have created:

        var mon_Pos;
        var hour;
        var minute;
        var orig;
        var myxpos;

        function calculate_Time(event) {
            myxpos = event.pageX;
            myxpos = myxpos - 194;
            if (myxpos < 60) {
                orig = myxpos;
                minute = myxpos;
                hour = 7;
            } else if (myxpos >= 60 && myxpos < 120) {
                orig = myxpos;
                minute = myxpos - 60;
                hour = 8;
            }
        }

    How do I go about creating a stylised tooltip containing the hour and minute variables I created? I need the tooltip to be updated every time the X position of the cursor changes. I know the myxpos variable changes whenever the mouse is moved, because I tried it out with alert(myxpos) and, as expected, every time the mouse moves a new alert box pops up with a new value. I just can't work out how to place that value in a tooltip.
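    For what it's worth, a minimal sketch of one approach: reuse a single absolutely positioned <div> as the tooltip and update it on every mousemove (the -194 offset and the hour arithmetic just mirror calculate_Time above):

        var $tip = $('<div/>').css({
            position: 'absolute', padding: '4px',
            background: '#333', color: '#fff', display: 'none'
        }).appendTo('body');

        $('#mon_Row').mousemove(function (e) {
            var x = e.pageX - 194,
                hour = 7 + Math.floor(x / 60),        // 7:xx, then 8:xx, ...
                minute = ((x % 60) + 60) % 60;
            $tip.text(hour + ':' + (minute < 10 ? '0' : '') + minute)
                .css({ left: e.pageX + 12, top: e.pageY + 12 })
                .show();
        }).mouseleave(function () {
            $tip.hide();
        });

    Because the div is created once and only its text and position change, this stays cheap even though mousemove fires constantly.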

    Read the article

  • Dealing with HTTP content in HTTPS pages

    - by El Yobo
    We have a site which is accessed entirely over HTTPS but sometimes displays external content which is HTTP (mainly images from RSS feeds). The vast majority of our users are also stuck on IE6. I would like both of the following: prevent the IE warning message about insecure content, and present something useful to users in place of the images they can't otherwise see; if there were some JS I could run to figure out which images haven't been loaded and replace them with an image of ours instead, that would be great. I suspect that the first aim is simply not possible, but the second may be sufficient. A worst-case scenario is that I parse the RSS feeds when we import them, grab the images and store them locally so that the users can access them that way, but it seems like a lot of pain for reasonably little gain.
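    For the second aim, a hedged sketch: attach an onerror handler to each image before it finishes loading and swap in a local placeholder (the /images/placeholder.png path is made up for the example). Whether IE6 fires onerror for content it blocks as insecure is worth verifying before committing to this:

        var imgs = document.getElementsByTagName('img');
        for (var i = 0; i < imgs.length; i++) {
            imgs[i].onerror = function () {
                this.onerror = null;                  // avoid a loop if the fallback 404s
                this.src = '/images/placeholder.png'; // a local, HTTPS-served image
            };
        }

    Run this as early as possible, e.g. in a script right after the feed markup, so the handlers are in place before the images resolve.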

    Read the article

  • Firefox extension: How to inject javascript into page and run it?

    - by el griz
    I'm writing a Firefox extension to allow users to annotate any page with text and/or drawings and then save an image of the page including the annotations. Use cases would be clients reviewing web pages, adding feedback to the page, saving the image of this and emailing it back to the web developer, or testers taking annotated screenshots of GUI bugs, etc. I wrote the annotation/drawing functionality in JavaScript before developing the extension. This script adds a <canvas> element to the page to draw upon, as well as a toolbar (in a <div>) that contains buttons (each a <canvas> element) for the different draw tools, e.g. line, box, ellipse, text, etc. This works fine when manually included in a page. I now need a way for the extension to: inject this script into any page, including pages I don't control, which needs to occur when the user invokes the extension and can be after the page has loaded; and, once injected, somehow run the init() function in the script that adds the canvas and toolbar elements, but I can't determine how to call this from the extension. Note that once injected I don't need the script to interact with the extension (the extension just takes a screenshot of the entire document, and removes the added page elements, when the user presses the save button in the extension chrome).
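    One possible shape for the injection, sketched from a XUL overlay script (the chrome:// URL and the injectAnnotator name are placeholders): append a <script> element to the content document and call the page's init() once it has loaded:

        function injectAnnotator() {
            var doc = gBrowser.contentDocument;       // document of the current tab
            var s = doc.createElement('script');
            s.type = 'text/javascript';
            s.src = 'chrome://annotator/content/annotate.js';  // assumed URL
            s.onload = function () {
                // call the injected script's entry point in page context
                doc.defaultView.wrappedJSObject.init();
            };
            doc.getElementsByTagName('head')[0].appendChild(s);
        }

    Note that a content page can only fetch a chrome:// URL if the package is registered with the contentaccessible=yes flag in chrome.manifest; the alternative is to write the script's source text directly into a new script element.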

    Read the article

  • Linux based MS Office thumbnail generation

    - by El Yobo
    I've been taken on board to work on a PHP based web application. One part of the application generates thumbnail images for MS Office documents on demand, and it uses MS Office + the VeryPDF docprint utility to do this. Because of this one requirement, the system is running on Windows Server 2003 + IIS. I would prefer to have the system running on a Linux server, rather than MS, as I have far more experience in administering Linux systems than Windows and we have no other in-house technical staff. Does anyone know a way to handle the document conversion using native Linux software? I would love something PHP native, but am willing to look outside that if necessary. Thanks for your suggestions.
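    One Linux-native route, sketched on the assumption that a headless OpenOffice/LibreOffice and ImageMagick are installed (JODConverter is another wrapper worth a look): convert the document to PDF, then rasterise the first page:

        // hedged sketch: shell out from PHP; the paths are illustrative
        $doc = escapeshellarg('/tmp/report.doc');
        exec("soffice --headless --convert-to pdf --outdir /tmp $doc");
        // [0] selects page one; ImageMagick needs Ghostscript for PDF input
        exec("convert -thumbnail 200x /tmp/report.pdf[0] /tmp/report.png");

    Rendering fidelity won't be pixel-identical to MS Office, so it's worth checking a sample of real documents before committing to the switch.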

    Read the article

  • Using Object input/output streams with files and an ArrayList

    - by soad el-hayek
    Hi everyone. I'm an IT student, and it's time to finish my final project in Java. I've faced too many problems; this one I couldn't solve, and I'm really upset. My code is like this, in the Admin class:

        public ArrayList cos_info = new ArrayList();
        public ArrayList cas_info = new ArrayList();
        public int cos_count = 0;
        public int cas_count = 0;

        void coustmer_acount() throws FileNotFoundException, IOException {
            String add = null;
            do {
                person p = new person();
                cos_info.add(cos_count, p);
                cos_count++;
                add = JOptionPane.showInputDialog("Do you want to add more coustmer..\n'y' for yes..\n'n' for no..");
            } while (add.charAt(0) == 'Y' || add.charAt(0) == 'y');
            writenew_cos();
            // add_acounts();
        }

        void writenew_cos() throws IOException {
            ObjectOutputStream aa = new ObjectOutputStream(new FileOutputStream("coustmer.txt"));
            aa.writeObject(cos_info);
            JOptionPane.showMessageDialog(null, "Added to file done sucessfuly..");
            aa.close();
        }

    And in the Coustmer class:

        void read_cos() throws IOException, ClassNotFoundException {
            person p1 = null;
            int array_count = 0;
            ObjectInputStream d = new ObjectInputStream(new FileInputStream("coustmer.txt"));
            JOptionPane.showMessageDialog(null, d.available());
            for (int i = 0; d.available() == 0; i++) {
                a.add(array_count, (ArrayList) d.readObject());
                array_count++;
                JOptionPane.showMessageDialog(null, "Haaaaai :D");
                JOptionPane.showMessageDialog(null, array_count);
            }
            d.close();
            JOptionPane.showMessageDialog(null, array_count + "1111");
            for (int i = 0; i < a.size() && found != true; i++) {
                count++;
                p1 = (person) a.get(i);
                user = p1.user;
                pass = p1.pass;
                cas_checkpass();
            }
        }

    It just shows the JOptionPane.showMessageDialog(null, d.available()); dialog and then throws an exception at a.add(array_count, (ArrayList) d.readObject());. P.S.: person is my own class and it is Serializable.
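    Since writenew_cos() serialises the whole ArrayList with a single writeObject() call, the reader only needs a single readObject(); looping on available() (which is close to meaningless for object streams) is what leads to the exception. A hedged sketch of a simpler read_cos():

        // assumes the same person class and coustmer.txt file as above
        ObjectInputStream in = new ObjectInputStream(new FileInputStream("coustmer.txt"));
        ArrayList list = (ArrayList) in.readObject();   // the one list that was written
        in.close();

        for (Object o : list) {
            person p1 = (person) o;      // person must implement Serializable
            // compare p1.user / p1.pass and call cas_checkpass() as before
        }

    The object you get back is the ArrayList itself, not individual person objects, so cast the list once and then cast each element.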

    Read the article

  • PHP/MySQL time zone migration

    - by El Yobo
    I have an application that currently stores timestamps in MySQL DATETIME and TIMESTAMP values. However, the application needs to be able to accept data from users in multiple time zones and show the timestamps in the time zone of other users. As such, this is how I plan to amend the application; I would appreciate any suggestions to improve the approach.

    Database modifications: All TIMESTAMPs will be converted to DATETIME values; this is to ensure consistency in approach and to avoid having MySQL try to do clever things and convert time zones (I want to keep the conversion in PHP, as it involves less modification to the application and will be more portable when I eventually manage to escape from MySQL). All DATETIME values will be adjusted to convert them to UTC time (currently all in Australian EST).

    Query modifications: All usage of NOW() will be replaced with UTC_TIMESTAMP() in queries, triggers, functions, etc.

    Application modifications: The application must store the time zone and preferred date format (e.g. US vs the rest of the world). All timestamps will be converted according to the user settings before being displayed, and all input timestamps will be converted to UTC according to the user settings before being input.

    Additional notes: Converting formats will be done at the application level for several main reasons. The approach to converting time zones varies from DB to DB, so handling it there will be non-portable (and I really hope to be migrating away from MySQL some time in the not-too-distant future). MySQL TIMESTAMPs have a limited range of permitted dates (~1970 to ~2038). MySQL TIMESTAMPs have other undesirable attributes, including bizarre auto-update behaviour (if not carefully disabled) and sensitivity to the server time zone settings (and I suspect I might screw these up when I migrate to Amazon later in the year).

    Is there anything that I'm missing here, or does anyone have better suggestions for the approach?
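    The PHP side of the conversion might look like this minimal sketch ('America/New_York' stands in for whatever zone is saved in the user's settings):

        $utc  = new DateTimeZone('UTC');
        $user = new DateTimeZone('America/New_York');   // from user settings

        // input: the user's local time -> UTC string for the DATETIME column
        $in = new DateTime('2010-06-15 09:30:00', $user);
        $in->setTimezone($utc);
        $forDb = $in->format('Y-m-d H:i:s');            // 2010-06-15 13:30:00

        // output: UTC string from the DB -> the user's local time for display
        $out = new DateTime($forDb, $utc);
        $out->setTimezone($user);
        echo $out->format('m/d/Y g:ia');                // per the user's format setting

    Keeping both directions behind a single pair of helper functions makes it easier to audit that every timestamp crosses the boundary exactly once.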

    Read the article

  • Laravel with Homestead

    - by Ahmed el-Gendy
    I'm new to VirtualBox and Vagrant. I'm now using the Homestead image and everything runs well, but when I create my project named laravel on the virtual machine, I'm supposed to see this new folder named laravel on my machine, yet I don't get anything there; the synchronisation is not working. NOTE: I'm using Ubuntu 14.04. This is my Homestead.yaml:

        ip: "192.168.10.10"
        memory: 2048
        cpus: 1
        authorize: ~/.ssh/id_rsa.pub
        keys:
            - ~/.ssh/id_rsa
        folders:
            - map: /var/projects/
              to: /home/vagrant/projects/
        sites:
            - map: homestead.app
              to: /home/vagrant/projects/laravel/public
        variables:
            - key: APP_ENV
              value: local

    Thanks in advance.
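    If it helps, a sketch of the usual fix: in Homestead.yaml, map is the folder on the host (your Ubuntu machine) and to is the folder inside the VM; the host folder must already exist (and be writable by you) before the VM boots, and folder mappings are only applied when the machine is provisioned:

        folders:
            - map: /var/projects           # on the host - create it first
              to: /home/vagrant/projects   # inside the Homestead VM

        # then, from the Homestead directory on the host:
        #   vagrant reload --provision

    With the share working, a project created in /home/vagrant/projects inside the VM appears in /var/projects on the host, and vice versa.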

    Read the article

  • Rails 4 json return on API

    - by El - Key
    I'm creating an API for my application. I have currently overridden the as_json method in my model in order to get the attached files, as well as the logo, from Paperclip:

        def as_json(options = {})
          super.merge(logo_small: self.logo.url(:small),
                      logo_large: self.logo.url(:large),
                      taxe: self.taxe,
                      attachments: self.attachments)
        end

    Then within my controller I'm doing:

        def index
          @products = current_user.products
          respond_with @products
        end

        def show
          respond_with @product
        end

    The problem is that in the index I don't want to get all the attachments; I only need them in the show method. So I tried this:

        def index
          @products = current_user.products
          respond_with @products, except: [:attachments]
        end

    But unfortunately it's not working. How can I avoid sending :attachments? Thanks
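    One hedged workaround: :except only filters real attributes, not keys merged in by an as_json override, so make the merge conditional and pass a flag through (the :with_attachments key is made up for this sketch):

        # model
        def as_json(options = {})
          json = super.merge(logo_small: logo.url(:small),
                             logo_large: logo.url(:large),
                             taxe: taxe)
          json[:attachments] = attachments if options[:with_attachments]
          json
        end

        # controller
        def show
          respond_with @product, with_attachments: true
        end

    respond_with forwards its options to the renderer, which hands them on to as_json, so index (with no flag) stays lean while show opts in.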

    Read the article

  • How to determine the item at an index in a pattern

    - by el.gringogrande
    I have the following elements in a list/array: a1, a2, a3, and these elements are used to build another list in a predictable pattern, for example a1,a1,a2,a2,a3,a3,a1,a1,a2,a2,a3,a3... The pattern may change, but I will always know how many times each element repeats, and all elements repeat the same number of times. The elements always show up in the same order. So another pattern might be a1,a1,a1,a2,a2,a2,a3,a3,a3,a1,a1,a1,a2,a2,a2,a3,a3,a3... or a1,a2,a3,a1,a2,a3. It will never be a2,a2,a1,a1,a3,a3... or a1,a2,a3,a2,a3,a1, etc. How do I determine which element is at any given index in the list?
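    Integer arithmetic is enough here: with n distinct elements each repeated r times, the element at 0-based index i is element (i / r) mod n of the base list. A short sketch:

        # base = the distinct elements in order, r = repeats per element
        def element_at(base, r, i):
            return base[(i // r) % len(base)]

        element_at(['a1', 'a2', 'a3'], 2, 5)   # -> 'a3' in a1,a1,a2,a2,a3,a3,...

    Dividing by r collapses each run to a single slot, and the modulo wraps around once all n elements have appeared.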

    Read the article

  • Stretching DIV to 100% height and width of window but not less than 800x600px

    - by El Eme
    I have a page that needs to stretch and resize with the window, and I've managed to do that, but I need the "inner div" (#pgContent) to stretch when the window is resized to larger dimensions without shrinking below, let's say, 800 x 600 px. As I have it now it stretches well, but it also shrinks more than I want. It's working as I want in width but not in height. My CSS:

        * { margin: 0; padding: 0; outline: 0; }

        /*| PAGE LAYOUT |*/
        html, body {
            height: 100%;
            width: 100%;
            /*text-align: center;*/ /* IE doesn't like this */
            cursor: default;
        }
        #pgWrapper {
            z-index: 1;
            min-height: 100%;
            height: auto !important;
            /*min-height: 600px;*/ /* THIS SHOULD WORK BUT IT DOESN'T */
            height: 100%;
            min-width: 1000px;
            width: 100%;
            position: absolute;
            overflow: hidden;
            text-align: center;
            background: #000;
        }
        #pgContent {
            position: absolute;
            top: 30px;
            left: 30px;
            right: 30px;
            bottom: 50px;
            overflow: hidden;
            background: #CCC;
        }
        #footWrapper {
            z-index: 2;
            height: 50px;
            min-width: 940px;
            position: absolute;
            left: 30px;
            right: 30px;
            bottom: 0px;
            background: #C00;
        }
        /*| END PAGE LAYOUT |*/

    And the HTML:

        <body>
            <div id="pgWrapper">
                <div id="pgContent">
                    This DIV should stretch with the window but never below, for example, 800px x 600px!<br />
                    If the window size is smaller, then scrollbars should appear.
                </div>
            </div>
            <div id="footWrapper">
                <div id="footLft"></div>
                <div id="footRgt"></div>
            </div>
        </body>

    If someone could give me a hand with this I would appreciate it. Thanks in advance.
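    For the height side, a sketch of one approach: since #pgContent is absolutely positioned with both top and bottom set, a min-height on it wins when the window gets too short; the remaining pieces are a matching floor on the wrapper and letting scrollbars appear instead of clipping (the 680px assumes the 30px top and 50px bottom insets above):

        #pgWrapper {
            height: 100%;
            min-height: 680px;    /* 600 content + 30 top + 50 bottom */
            min-width: 1000px;
            overflow: visible;    /* 'hidden' was clipping the inner div */
        }
        #pgContent {
            min-height: 600px;
            min-width: 800px;
        }
        html, body { height: 100%; overflow: auto; }

    Older IE ignores min-height/min-width, so for IE6 the usual fallback is a CSS expression() or an extra inner element with a fixed size.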

    Read the article

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following is going to result in better performance for a page which loads a large amount of JavaScript (jQuery + jQuery UI + various other JavaScript files). I have gone through most of the YSlow and Google Page Speed material, but am left wondering about a particular detail. A key thing for me here is that the site I'm working on is not on the public net; it's a business-to-business platform where almost all users are repeat visitors (and therefore with caches of the data, which is something that YSlow assumes will not be the case for a large number of visitors). First up, the standard approach recommended by tools such as YSlow is to concatenate it, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think that a key part of the reasoning here is to improve performance for users without cached data. The system I currently have is something like this: all JavaScript files are compressed and loaded at the bottom of the page; all JavaScript files have far-future cache expiration dates, so they will remain (for most users) in the cache for a long time; pages only load the JavaScript files that they require, rather than loading one monolithic file, most of which will not be required. Now, my understanding is that, if the cache expiration date for a JavaScript file has not been reached, then the cached version is used immediately; there is no HTTP request sent to the server at all. If this is correct, I would assume that having multiple <script> tags is not causing any performance penalty, as I'm still not making any additional requests on most pages (recalling from above that almost all users have populated caches). In addition to this, not loading the JS means that the browser doesn't have to interpret or execute all this additional code which it isn't going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once, so this is not so much of a benefit). I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached. So, what do people think is the better approach? In a similar vein, what do you think about a similar approach to CSS - is monolithic better?

    Read the article

  • Response as mail attachment

    - by el ninho
    I have this working fine:

        using (MemoryStream stream = new MemoryStream())
        {
            compositeLink.PrintingSystem.ExportToPdf(stream);
            Response.Clear();
            Response.Buffer = false;
            Response.AppendHeader("Content-Type", "application/pdf");
            Response.AppendHeader("Content-Transfer-Encoding", "binary");
            Response.AppendHeader("Content-Disposition", "attachment; filename=test.pdf");
            Response.BinaryWrite(stream.GetBuffer());
            Response.End();
        }

    The next step is to send this PDF file by mail, as an attachment:

        using (MemoryStream stream = new MemoryStream())
        {
            compositeLink.PrintingSystem.ExportToPdf(stream);
            Response.Clear();
            Response.Buffer = false;
            Response.AppendHeader("Content-Type", "application/pdf");
            Response.AppendHeader("Content-Transfer-Encoding", "binary");
            Response.AppendHeader("Content-Disposition", "attachment; filename=test.pdf");
            Response.BinaryWrite(stream.GetBuffer());

            System.Net.Mail.MailMessage message = new System.Net.Mail.MailMessage();
            message.To.Add("[email protected]");
            message.Subject = "Subject";
            message.From = new System.Net.Mail.MailAddress("[email protected]");
            message.Body = "Body";
            message.Attachments.Add(Response.BinaryWrite(stream.GetBuffer()));

            System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("192.168.100.100");
            smtp.Send(message);
            Response.End();
        }

    I have a problem with this line:

        message.Attachments.Add(Response.BinaryWrite(stream.GetBuffer()));

    Any help on how to get this to work? Thanks
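    Response.BinaryWrite() returns void, so its result can't be passed to Attachments.Add(); a hedged sketch of the usual shape is to rewind the stream, build an Attachment directly from it, send the mail, and only then write the response:

        stream.Position = 0;   // rewind: ExportToPdf left the cursor at the end
        message.Attachments.Add(new System.Net.Mail.Attachment(
            stream, "test.pdf", "application/pdf"));

        var smtp = new System.Net.Mail.SmtpClient("192.168.100.100");
        smtp.Send(message);    // send while the stream is still open

        Response.BinaryWrite(stream.ToArray());   // then return it to the browser
        Response.End();

    ToArray() is also safer than GetBuffer() here, since GetBuffer() returns the whole underlying buffer, including unused capacity.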

    Read the article

  • Address of default RHEL YUM Repository

    - by El Guapo
    Does anyone know the address of the default RHEL repository? We rent a server hosted by a third party, and one of our users seems to have deleted a lot of the RHN packages from our box, including the rhn-yum-plugin. I really don't want to waste one of our support requests on having them load the DVD into the box and re-initialise the RHN software. Thanks.

    Read the article

  • JS. How to replace html element with another element/text, represented in string?

    - by EL 2002
    I have a problem with replacing HTML elements. For example, there is a table:

        <table><tr><td id="idTABLE">0</td><td>END</td></tr></table>

    (it can be a div, a span, anything) and a string in the JS script:

        var str = '<td>1</td><td>2</td>';

    (it can be anything: '123 text', '<span>123 element</span> 456', ' <tr><td>123</td></tr> ' or whatever). How can I replace the element idTABLE with str? I mean really replace, so that

        <table><tr><td id="idTABLE">0</td><td>END</td></tr></table>

    becomes

        <table><tr><td>1</td><td>2</td><td>END</td></tr></table>  // str = '<td>1</td><td>2</td>'
        <table><tr>123 text<td>END</td></tr></table>              // '123 text'
        <table><tr><tr><td>123</td></tr><td>END</td></tr></table> // ' <tr><td>123</td></tr> '

    I tried createElement, replaceChild and cloneNode, but with no result at all =(
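    A sketch of one cross-browser route (older IE won't let you set innerHTML on a <table> or <tr>, but a scratch <div> containing a complete table works): parse str inside the scratch element, then move the parsed nodes in front of the old cell and remove it:

        var cell = document.getElementById('idTABLE');
        var row  = cell.parentNode;

        var scratch = document.createElement('div');
        scratch.innerHTML = '<table><tbody><tr>' + str + '</tr></tbody></table>';

        var newRow = scratch.getElementsByTagName('tr')[0];
        while (newRow.firstChild) {
            row.insertBefore(newRow.firstChild, cell);  // moves the node out of scratch
        }
        row.removeChild(cell);

    This covers strings of <td>s; for arbitrary replacement strings the scratch wrapper has to match the context being inserted into (e.g. no table wrapper when replacing a plain <span>).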

    Read the article

  • Image classification: recognizing various features of many buildings from images

    - by el chief
    So, let's say I have the long/lat or address of many buildings, and I can get satellite images, "street view", and perhaps 3D/perspective views of the buildings. I want to find the height, number of floors, and floor area (maximum building footprint) of each building, for about 200k buildings. Is there a library for recognising buildings from satellite shots or pictures? Kind of like face detection, I suppose. Any other suggestions? Thanks!

    Read the article

  • Java Insert Help...???

    - by El Classico
    I am trying to do an INSERT, UPDATE and DELETE on a table in MS Access. Everything works fine for a SELECT statement, but when doing the other three operations I don't seem to get any errors, yet the actions are not reflected in the DB. Please help. The INSERT statement is as follows:

        PreparedStatement ps = con.prepareStatement("INSERT INTO Student VALUES (?, ?, ?, ?, ?, ?, ?, ?)");
        ps.setInt(1, 1);
        ps.setString(2, "ish");
        ps.setInt(3, 100);
        ps.setInt(4, 100);
        ps.setInt(5, 100);
        ps.setInt(6, 300);
        ps.setInt(7, 100);
        ps.setString(8, "A");
        ps.executeUpdate();

    Also, may I ask why PreparedStatement would be used for anything other than a SELECT statement?
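    When no exception is thrown, a usual suspect with the JDBC-ODBC bridge is the change never being flushed to the file; a hedged sketch: commit explicitly and close everything, then check whether the row appears:

        con.setAutoCommit(false);
        PreparedStatement ps = con.prepareStatement(
                "INSERT INTO Student VALUES (?, ?, ?, ?, ?, ?, ?, ?)");
        // ... the same setInt / setString calls as above ...
        ps.executeUpdate();
        con.commit();    // make sure the insert reaches the .mdb file
        ps.close();
        con.close();

    As for the last question: PreparedStatement is not SELECT-specific; it is used for any parameterised SQL, because it precompiles the statement and escapes parameter values, which also protects against SQL injection.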

    Read the article

  • ADNOC talks about 50x increase in performance

    - by KLaker
    If you are still wondering how Exadata can revolutionise your business, then I would recommend watching this great video, which was recorded at this year's OpenWorld. First, a little background... The Abu Dhabi National Oil Company for Distribution (ADNOC) is an integrated energy company that was founded in 1973. ADNOC Distribution markets and distributes petroleum products and services within the United Arab Emirates and internationally. As one of the largest and most innovative government-owned petroleum companies in the Arab Gulf, ADNOC Distribution is renowned and respected for the exceptional quality and reliability of its products and services. Its five corporate divisions include more than 200 filling stations (a number that is growing at 8% annually), more than 150 convenience stores, 10 vehicle inspection stations, as well as wholesale and retail sales of bulk fuel, gas, oil, diesel, and lubricants. ADNOC selected Oracle Exadata Database Machine after extensive research because it provided them with a single platform that can run mixed workloads in a single unified machine: "We chose Oracle Exadata Database Machine because it offered a fully integrated and highly engineered system that was ready to deploy. With our infrastructure running all the same technology, we can operate any type of Oracle Database without restrictions and be prepared for business growth," said Ali Abdul Aziz Al-Ali, IT division manager, ADNOC Distribution. ".....we could consolidate our transaction processing and business intelligence onto one platform. Competing solutions are just not capable of doing that." - Awad Ahmed Ali El-Sidiq, Senior Database Administrator, ADNOC Distribution. In this new video Awad Ahmed Ali El-Sidiq, Senior DBA at ADNOC, talks about the impact that Exadata has had on his team and the whole business. ADNOC is using our engineered systems to drive and manage all their workloads: from transaction systems to payment systems to data warehouse to BI environment. A true disk-to-dashboard revolution using engineered systems. This engineered approach is delivering a 50x improvement in performance, with some queries running 100x faster! The IT team has even revolutionised some of their data-warehouse-related processes with the help of Exadata, and jobs that were taking over 4 hours now run in a few minutes. To watch the video, click on the image below, which will take you to our Oracle YouTube page (if the link does not work, click here: http://www.youtube.com/watch?v=zcRpxc6u5Ic). Now that queries are running 100x faster and jobs are completing in minutes not hours, what is next for the IT team at ADNOC? Like many of our customers, ADNOC is now looking to take advantage of big data to help them better align their business operations with customer behaviour and customer insights. To help deliver this next level of insight, the IT team is looking at the new features in Oracle Database 12c, such as the new in-memory feature, to deliver even more performance gains. The great news is that Awad Ahmed Ali El-Sidiq was awarded DBA of the Year - EMEA within our Data Warehouse Global Leaders programme, and you can see the badge for this award pop up at the start of the video. Well done to everyone at ADNOC, and thanks for spending the time with us at OOW to create this great video.

    Read the article

  • Migrating from SQL Trace to Extended Events

    - by extended_events
    In SQL Server codenamed "Denali" we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration, which is always a bit of a hassle. I'm obviously motivated to see everyone move their diagnostic tracing systems over to the new Extended Events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials. In my initial post on Denali CTP 1 I described a couple of tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I'll describe the tables in a bit more detail, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Events objects (Events & Actions), and at the end provide some sample code for a managed stored procedure that will take an existing SQL Trace session (e.g. a trace that you can see in sys.traces) and convert it into event session DDL.

    Can you relate? In some ways, SQL Trace and Extended Events are kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It's often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they're comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other. First, some terms... SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine; for example, SQL:Batch Completed fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it, and those Columns contain the data that is interesting about the Event Class, such as the duration or database name. In Extended Events we have objects named Events, EventData fields and Actions. The Event (some people call this an xEvent but I'll stick with Event) is equivalent to the Event Class in SQL Trace, since it is the thing that occurs as the result of some activity taking place in the server. An EventData field (from now on I'll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event, and it will cause some additional "action" to occur whenever the parent Event occurs. Actions can do a number of different things; for example, there are Actions that collect additional data and take memory dumps. When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice. Let me draw you a map.

    Event Mapping: The table dbo.trace_xe_event_map exists in the master database with the following structure:

        Column_name     Type
        trace_event_id  smallint
        package_name    nvarchar
        xe_event_name   nvarchar

    By joining this table to sys.trace_events using trace_event_id, and to sys.dm_xe_objects using xe_event_name, you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event.
        SELECT
            t.trace_event_id,
            t.name [event_class],
            e.package_name,
            e.xe_event_name
        FROM sys.trace_events t
        INNER JOIN dbo.trace_xe_event_map e
            ON t.trace_event_id = e.trace_event_id

    There are a couple of things you'll notice as you peruse the output of this query: For the most part, the names of Events are fairly close to the original Event Class; e.g. SP:CacheMiss == sp_cache_miss, and so on. We've mostly stuck to a one-to-one mapping between Event Classes and Events, but there are a few cases where we have combined them when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink & Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a "smarter" implementation for this type of event; you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures. There are some Event Classes that did not make the cut and were not migrated. These fall into two categories: there were a few Event Classes that had been deprecated, or that just did not make sense, so we didn't migrate them. (You won't find an Event related to mounting a tape - sorry.) The second class is bigger; with rare exception, we did not migrate any of the Event Classes that were related to security auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008 and that will be the compliance and auditing feature going forward. Doing this is a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing, so you can assign these tasks to different people if you choose. (If you're wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.)

    Action Mapping: The table dbo.trace_xe_action_map exists in the master database with the following structure:

        Column_name      Type
        trace_column_id  smallint
        package_name     nvarchar
        xe_action_name   nvarchar

    You can find more details by joining this to sys.trace_columns on the trace_column_id field:

        SELECT
            c.trace_column_id,
            c.name [column_name],
            a.package_name,
            a.xe_action_name
        FROM sys.trace_columns c
        INNER JOIN dbo.trace_xe_action_map a
            ON c.trace_column_id = a.trace_column_id

    If you examine this list, you'll notice that there are relatively few Actions that map to SQL Trace Columns, given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events, so there is no need to specify them as Actions.

    Putting it all together: If you've spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn't, then you probably know that the typical set of Columns you find associated with any given Event Class in SQL Profiler is not fixed, but is determined by the contents of the table sys.trace_event_bindings. We've used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes.
        USE MASTER;
        GO
        SELECT DISTINCT
            tb.trace_event_id,
            te.name AS 'Event Class',
            em.package_name AS 'Package',
            em.xe_event_name AS 'XEvent Name',
            tb.trace_column_id,
            tc.name AS 'SQL Trace Column',
            am.xe_action_name AS 'Extended Events action'
        FROM (sys.trace_events te LEFT OUTER JOIN dbo.trace_xe_event_map em
            ON te.trace_event_id = em.trace_event_id)
        LEFT OUTER JOIN sys.trace_event_bindings tb
            ON em.trace_event_id = tb.trace_event_id
        LEFT OUTER JOIN sys.trace_columns tc
            ON tb.trace_column_id = tc.trace_column_id
        LEFT OUTER JOIN dbo.trace_xe_action_map am
            ON tc.trace_column_id = am.trace_column_id
        ORDER BY te.name, tc.name

    As you might imagine, it's also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events, in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session.

        USE MASTER;
        GO
        DECLARE @trace_id int
        SET @trace_id = 1
        SELECT DISTINCT el.eventid, em.package_name, em.xe_event_name AS 'event'
            , el.columnid, ec.xe_action_name AS 'action'
        FROM (sys.fn_trace_geteventinfo(@trace_id) AS el
            LEFT OUTER JOIN dbo.trace_xe_event_map AS em
                ON el.eventid = em.trace_event_id)
        LEFT OUTER JOIN dbo.trace_xe_action_map AS ec
            ON el.columnid = ec.trace_column_id
        WHERE em.xe_event_name IS NOT NULL AND ec.xe_action_name IS NOT NULL

    You'll notice in the output that the list doesn't include any of the security audit Event Classes; as I wrote earlier, those were not migrated.

    But wait... there's more! If this were an infomercial there'd be some obnoxious guy next to me blogging "Well Mike... that's pretty neat, but I'm sure you can do more. Can't you make it even easier to migrate from SQL Trace?" Needless to say, I'd blog back, in an overly excited way, "You bet I can, obnoxious blogger side-kick!" What I've got for you here is an Extended Events Team Blog only special - this tool will not be sold in any store; it's a special offer for those of you reading the blog. I've wrapped all the logic of pulling the configuration information out of an existing trace and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure, you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included in comments within the DDL output. This procedure does not actually create the event session - you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (but it doesn't have to be running) to evaluate; there is no functionality to parse T-SQL scripts. I'm not going to spend a bunch of time explaining the code here - the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail.
    Sample code: TraceToExtendedEventDDL

    Installing the procedure: Just in case you're not familiar with installing CLR procedures... once you've compiled the assembly you can load it using a script like this:

        -- Context to master
        USE master
        GO

        -- Create the assembly from a shared location.
        CREATE ASSEMBLY TraceToXESessionConverter
        FROM 'C:\Temp\TraceToXEventSessionConverter.dll'
        WITH PERMISSION_SET = SAFE
        GO

        -- Create a stored procedure from the assembly.
        CREATE PROCEDURE CreateEventSessionFromTrace
        @trace_id int,
        @session_name nvarchar(max)
        AS
        EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent
        GO

    Enjoy! -Mike

    Read the article

  • Simulating production load to anticipate errors

    - by [email protected]
    The day-to-day pressure for business agility and for consistently high service levels makes quality management a basic imperative. To address this, Oracle offers services through its ATS (Application Testing Suite) solution to meet quality objectives. Oracle Functional Testing automates tedious testing tasks, reducing the level of effort within test teams and guaranteeing quality with every change to production systems. Oracle Load Testing simulates production load on your environments and anticipates errors arising from concurrency, congestion, performance and lack of capacity, without affecting end users. The Oracle suite is tested and certified on the following platforms: Siebel 7.x and 8.x, e-Business Suite 11i10 and above, Hyperion, PeopleSoft, JD Edwards, web applications, web services, and databases. Brochure: Oracle Load Testing

    Read the article

  • Oracle Hyperion Day

    - by Oracle Aplicaciones
    On 1 December Oracle held the Hyperion Day at the emblematic Torre Espacio in Madrid. During the event we had the opportunity to learn about the latest news on Oracle's financial solutions.

    Read the article

  • ING Selects Oracle Fusion HCM for Human Capital Management

    - by Noelia Gomez
    ING's decision to choose Oracle Fusion HCM was driven by its continuing desire to strengthen the HR function to drive its business. A long-time Oracle PeopleSoft HCM customer, ING evaluated the HCM market segment before selecting Oracle Fusion HCM, with the goal of continuing to use best-in-class HR tools and maintaining its position as one of the leading employers in the Netherlands. ING will implement Oracle Fusion HCM in close collaboration with NorthgateArinso, a Gold member of Oracle PartnerNetwork, which will provide hosting services along with Oracle Consulting. The financial services firm will use the full Oracle Fusion Global Human Resources and Oracle Talent Management suite, including Oracle Fusion Goal, Oracle Fusion OTBI, Oracle Fusion Workforce Compensation, Oracle Fusion Benefits and Oracle Fusion Performance Management. In addition, ING will implement Oracle Fusion Financials, taking advantage of the unified Oracle Fusion Applications platform. "As a leading financial institution and top employer, ING considers its people its most strategic asset. Remaining best-in-class in human resources is a competitive differentiator for us, and we need a leading HCM application to complement our efforts," said Marijke Brunklaus, General Manager of Human Resources at ING Bank Netherlands. "As a long-standing user of Oracle PeopleSoft HCM, we carefully evaluated the available solutions and chose Oracle Fusion HCM as our future platform. The deep global capabilities and core talent tools of Oracle Fusion HCM are a good fit to help us continuously evolve our business." You can learn more about Fusion HCM: Oracle Fusion Applications · Oracle Fusion Human Capital Management · Oracle PartnerNetwork · Oracle Consulting Services · Oracle Human Capital Management Blog · Oracle HCM on Twitter · Oracle HCM on Facebook

    Read the article
