Search Results

Search found 60096 results on 2404 pages for 'data distribution'.


  • Type patterns and generic classes in Haskell

    - by finnsson
    I'm trying to understand type patterns and generic classes in Haskell but can't seem to get it. Could someone explain it in layman's terms? In [1] I've read that "To apply functions generically to all data types, we view data types in a uniform manner: except for basic predefined types such as Float, IO, and ->, every Haskell data type can be viewed as a labeled sum of possibly labeled products." and then Unit, :*: and :+: are mentioned. Are all data types in Haskell automatically versions of the above, and if so, how do I figure out how a specific data type is represented in terms of :*:, etc.? The user's guide for generic classes (ch. 7.16) at haskell.org doesn't mention the predefined types, but shouldn't they be handled in every function if the type patterns are supposed to be exhaustive? [1] Comparing Approaches to Generic Programming in Haskell, Ralf Hinze, Johan Jeuring, and Andres Löh
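    For what it's worth, here is a minimal, self-contained sketch of the sum-of-products view (these are illustrative stand-ins, not GHC's actual built-in representation types): a constructor with no fields corresponds to Unit, a choice between constructors to :+:, and a constructor's fields to :*:.

        {-# LANGUAGE TypeOperators #-}

        data Unit    = Unit            -- a constructor with no fields
        data a :+: b = Inl a | Inr b   -- choice between two constructors (sum)
        data a :*: b = a :*: b         -- two constructor fields (product)

        -- Example: a user-defined type...
        data Tree = Leaf Int | Node Tree Tree

        -- ...is viewed structurally as a sum of products:
        type TreeRep = Int :+: (Tree :*: Tree)

        fromTree :: Tree -> TreeRep
        fromTree (Leaf n)   = Inl n
        fromTree (Node l r) = Inr (l :*: r)

        toTree :: TreeRep -> Tree
        toTree (Inl n)         = Leaf n
        toTree (Inr (l :*: r)) = Node l r

    A generic function then only needs cases for Unit, :+:, :*: and the primitive types (Int, Float, ...), which is why the primitives still have to be handled somewhere even though the structural cases cover every user-defined type.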


  • SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

    - by Pinal Dave
    [Notes from Pinal]: The biggest challenge people face is not taking a backup; it is restoring a backup successfully. I have seen many examples where users failed to restore their database because they made a mistake while taking the backup and were not aware of it. The tail log backup was such an issue in earlier versions of SQL Server, but in the latest version Microsoft has addressed the confusion with additional information on the backup and restore screens. Now that this information is shown, a few more people are confused simply because they have never seen it before: previously they did not consider it an issue, and now the tail log is a new thing to learn. Linchpin People are database coaches and wellness experts for a data driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains in very simple words how to back up and recover the tail end of a transaction log.

    Many times when restoring a database over an existing database, SQL Server will warn you about needing to take a tail-of-the-log backup. This might be your reminder that you have to choose to overwrite the database, or your reminder that you are about to write over and lose any transactions since the last transaction log backup. You might be asking yourself, "What is the tail end of the transaction log?" The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture this part of the log.

    Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement; however, if your organization is taking transaction log backups every 15 minutes, then your potential data loss is up to 15 minutes. Depending on the extent of the issue forcing you to perform a restore, you may or may not have access to the transaction log (LDF) to back up those vital transactions. For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged, then you wouldn't be able to recover the tail end of the log. If you do have access to the physical log file, then you can still back up the tail end of the log.

    In 2013 I presented a session at the PASS Summit called "The Ultimate Tail Log Backup and Restore" and have been invited back this year to present it again. During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can't become more corrupt than that. I attempt to bring the database back online, which changes its state to RECOVERY PENDING, and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR; as its name says, it does not try to truncate the log. This is a great demo; however, how could I back up the tail end of the log if the failure destroyed my entire SQL Server instance and all I had was the LDF file? During my demonstration I also show that I can attach the log file to a database on another instance and then back up the tail end of the log.
    If I am performing proper backups, then my most recent full, differential, and log files should be on a server other than the one that crashed. I am able to achieve this task by creating a new database with the same name as the failed database. I then set the database offline, delete its data file, and overwrite the log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE, just like in the first example. I encourage each of you to view my blog post and watch the video demonstration of how to perform these tasks. I really hope that none of you ever has to do this in production; however, it is a really good idea to know how, just in case. It isn't a matter of "IF" you will have to perform a restore of a production system but "WHEN". Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss and not. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL Backup and Recovery, a must-have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
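    For reference, a minimal sketch of the tail-log backup described above (the database name and backup path are illustrative):

        BACKUP LOG [FailedDB]
        TO DISK = N'D:\Backups\FailedDB_tail.trn'
        WITH NO_TRUNCATE;

    Run against a database whose data file is gone, this command can still succeed because NO_TRUNCATE implies COPY_ONLY and CONTINUE_AFTER_ERROR.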


  • CVE-2012-0444 Memory corruption vulnerability in Ogg Vorbis

    - by chandan
    CVE / Description / CVSSv2 Base Score / Component / Product and Resolution:

    CVE-2012-0444 (Memory corruption vulnerability), CVSSv2 Base Score 10.0.
    Component: libvorbis.
    Resolution: Solaris 11 11/11 SRU 8.5; Solaris 10 SPARC: 148006-01, X86: 148007-01.

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.


  • Get all Select Elements in a Form by referencing $(this) instead of $("form select")

    - by DaveDev
    Hi Guys, I'm currently getting all the select elements that exist in a form with the following:

        $("form").submit(function(event) {
            // gather data
            var data = GetSelectData($("form select"));
            // do submit
            $.post($(this).attr("action"), data /* , ...etc */);
        });

    Instead of passing in $("form select"), is there a way I can say something like

        $(this).children('select')  // this doesn't work, btw

    to get all the select elements that exist within the context of the form the submit event is executing for? This would allow me to reduce my code to the following, moving all the functionality into a common function:

        $("form").submit(function(event) {
            GatherDataAndSubmit($(this));
        });

        function GatherDataAndSubmit(obj) {
            var data = GetSelectData(obj.children('select'));
            $.post(obj.attr("action"), data /* , ...etc */);
        }

    Thanks, Dave
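    A likely reason $(this).children('select') comes up empty is that .children() only looks one level down the DOM tree, and select elements usually sit inside other containers; .find() searches all descendants. A minimal sketch of the refactor under that assumption:

        $("form").submit(function(event) {
            GatherDataAndSubmit($(this));  // `this` is the form being submitted
        });

        function GatherDataAndSubmit(obj) {
            // .find() matches descendants at any depth, unlike .children()
            var data = GetSelectData(obj.find('select'));
            $.post(obj.attr("action"), data /* , ...etc */);
        }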


  • Returned JSON is seemingly mixed up when using jQuery Ajax

    - by Niall Paterson
    I've a PHP script that has the following line:

        echo json_encode(array('success' => 'true', 'userid' => $userid, 'data' => $array));

    It returns the following:

        {
            "success": "true",
            "userid": "1",
            "data": [
                { "id": "1", "name": "Trigger", "image": "", "subtitle": "", "description": "", "range1": null, "range2": null, "range3": null },
                { "id": "2", "name": "DWS", "image": "", "subtitle": "", "description": "", "range1": null, "range2": null, "range3": null }
            ]
        }

    But when I call a jQuery ajax as below:

        $.ajax({
            type: 'POST',
            url: 'url',
            crossDomain: true,
            data: {name: name},
            success: function(success, userid, data) {
                if (success = true) {
                    document.write(userid);
                    document.write(success);
                }
            }
        });

    The userid is 'success'. The actual success one works, it's true. Is this malformed data being returned? Or is it simply my code? Thanks in advance, Niall
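    It is the calling code: jQuery hands the success callback the arguments (data, textStatus, jqXHR), so the entire parsed object arrives as the first argument and the second argument is never your userid field. (Note also that success = true is an assignment, not a comparison.) A sketch of the corrected handler, assuming the PHP response shown above:

        $.ajax({
            type: 'POST',
            url: 'url',
            crossDomain: true,
            data: {name: name},
            dataType: 'json',
            // jQuery's callback signature is (data, textStatus, jqXHR)
            success: function(response) {
                if (response.success === 'true') {  // the PHP sends the *string* 'true'
                    document.write(response.userid);
                }
            }
        });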


  • Active Directory Snapshots with Windows Server 2008

    Snapshots are a useful feature of Windows Server 2008. Taking a snapshot of Active Directory as a scheduled task can prove to be a wise precaution in case disaster strikes. Once they are mounted, snapshots can be accessed by any LDAP tool that allows the user to specify a host name and port number. Ben Lye shows how you can restore attributes to a large number of broken distribution groups from a snapshot.
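    As a rough sketch, the create/mount/expose cycle looks like this on Server 2008 (the mounted snapshot path and the LDAP port are illustrative; the real path follows the $SNAP_..._VOLUMEC$ pattern that the mount step prints):

        ntdsutil "activate instance ntds" snapshot create quit quit
        ntdsutil snapshot "list all" "mount 1" quit quit
        dsamain /dbpath C:\$SNAP_201001010000_VOLUMEC$\Windows\NTDS\ntds.dit /ldapport 51389

    Once dsamain is running, any LDAP client can browse the snapshot by connecting to that host on port 51389.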


  • How to set parameters in Python zlib module

    - by fagricipni
    I want to write a Python program that makes PNG files. My big problem is with generating the CRC and the data in the IDAT chunk. Python 2.6.4 does have a zlib module, but there are extra settings needed. The PNG specification REQUIRES the IDAT data to be compressed with zlib's deflate method with a window size of 32768 bytes, but I can't find how to set those parameters in the Python zlib module. As for the CRC for each chunk, the zlib module documentation indicates that it contains a CRC function. I believe that calling that CRC function as crc32(data,-1) will generate the CRC that I need, though if necessary I can translate the C code given in the PNG specification. Note that I can generate the rest of the PNG file and the data that is to be compressed for the IDAT chunk, I just don't know how to properly compress the image data for the IDAT chunk after implementing the initial filtering step.
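    A sketch of both pieces, assuming the filtered scanline bytes are already in hand (the variable filtered_scanlines is illustrative). Note that zlib's default window is already 32768 bytes (wbits=15), so plain zlib.compress(data) satisfies the PNG requirement; compressobj is shown only to make the parameter explicit. For the CRC, zlib.crc32 with its default starting value already implements the PNG/ISO CRC-32 (the -1 initialisation and final XOR happen internally), so no -1 argument is needed:

        import struct
        import zlib

        # Compress the IDAT payload; wbits=15 means a 32768-byte window (zlib's default).
        compressor = zlib.compressobj(9, zlib.DEFLATED, 15)  # level, method, wbits
        idat_data = compressor.compress(filtered_scanlines) + compressor.flush()

        def png_chunk(chunk_type, data):
            # Chunk layout: length (4 bytes, big-endian) + type + data + CRC over type+data.
            crc = zlib.crc32(chunk_type + data) & 0xffffffff  # mask: Python 2.6 may return a negative int
            return (struct.pack('>I', len(data)) + chunk_type + data
                    + struct.pack('>I', crc))

        chunk = png_chunk('IDAT', idat_data)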


  • How can we order a column as int using the Hibernate Criteria API?

    - by Satya
    Hi, I want to fetch data from the database using the Hibernate Criteria API, ordered by a column that holds numbers. The column is defined as varchar in the DB, but I have to sort it numerically. I am facing a problem with the Criteria API because it orders the values like strings. For example, I am getting the data as 9, 8, 7, 6, 5, 4, 3, 2, 1, 10, but I want it as 10, 9, 8, 7, 6, 5, 4, 3, 2, 1. Are there any Hibernate methods to convert varchar to number, like convert("some column", int) or cast("some column", int)?
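    The Criteria API's Order only sorts by the mapped property, but HQL exposes a cast() function that Hibernate translates to the database's CAST, so one workaround is to switch that query to HQL. A sketch, assuming an open Session and a dialect that supports ANSI CAST (the entity and property names are illustrative):

        // Orders numerically by delegating to SQL's CAST(some_column AS integer)
        List<MyEntity> rows = session
            .createQuery("from MyEntity e order by cast(e.someColumn as integer) desc")
            .list();

    Another common approach is to keep the Criteria query and push the cast into a database view or a read-only mapped formula column.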


  • Why is my JSON object from AJAX not understood by JavaScript, even with 'json' dataType?

    - by pete
    My JS code simply gets a JSON object from my server, and I think it should be automatically parsed and turned into an object with properties, yet it's not allowing access properly.

        $.ajax({
            type: 'POST',
            url: '/misc/json-sample.js',
            data: {href: path},  // the POST data passed in to the Drupal menu call to get the menu
            dataType: 'json',
            success: function (datax) {
                if (datax.debug) {
                    alert('Debug data: ' + datax.debug);
                } else {
                    alert('No debug data: ' + datax.toSource());
                }
            }
        });

    The /misc/json-sample.js file is:

        [ { "path": "examplemodule/parent1/child1/grandchild1", "title": "First grandchild option", "children": false } ]

    (I have also been trying to return that object from Drupal as follows, with the same results.) Drupal version of misc/json-sample.js:

        $items[] = array(
            'path' => 'examplemodule/parent1/child1/grandchild1',
            'title' => t('First grandchild option'),
            'debug' => t('debug me!'),
            'children' => FALSE,
        );
        print drupal_to_js($items);

    What happens (in FF, which has the toSource() capability) is the alert with 'No debug data: [ { "path": "examplemodule/parent1/child1/grandchild1", "title": "First grandchild option", "children": false } ]'. Thanks
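    Note that both versions return a JSON *array* wrapping a single object ($items[] = ... appends to an array), so the properties live on datax[0], not on datax itself. A minimal sketch of the access that should work with the responses shown:

        success: function (datax) {
            // datax is an array; index into it first
            if (datax[0] && datax[0].debug) {
                alert('Debug data: ' + datax[0].debug);
            } else {
                alert('No debug data: ' + datax.toSource());
            }
        }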


  • Problem upgrading 11.04

    - by Krazy_Kaos
    I've been trying to upgrade my Ubuntu 11.04 desktop computer, but when I click on the upgrade button I get an error (screenshots of the button and of the error omitted). I've tried to change my repositories, but it changes nothing in the error (it fails on "setting new software channel"). Can someone point me in the right direction? This is my sources.list:

        # deb http://ppa.launchpad.net/ailurus/ppa/ubuntu karmic main # disabled on upgrade to karmic
        # deb-src http://ppa.launchpad.net/ailurus/ppa/ubuntu karmic main # disabled on upgrade to karmic
        # deb cdrom:[Ubuntu 9.04 _Jaunty Jackalope_ - Release i386 (20090421.3)]/ jaunty main restricted
        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ natty main restricted multiverse universe
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ natty-updates main restricted multiverse universe
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        ## Uncomment the following two lines to add software from the 'backports'
        ## repository.
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb-src http://pt.archive.ubuntu.com/ubuntu/ jaunty-backports main restricted universe multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        deb http://archive.canonical.com/ubuntu natty partner
        deb-src http://archive.canonical.com/ubuntu natty partner
        deb http://us.archive.ubuntu.com/ubuntu/ natty-security main restricted multiverse universe
        deb http://us.archive.ubuntu.com/ubuntu/ natty-proposed restricted main multiverse universe
        # deb http://deb.torproject.org/torproject.org karmic main # disabled on upgrade to maverick
        # deb-src http://deb.torproject.org/torproject.org karmic main # disabled on upgrade to maverick
        deb http://extras.ubuntu.com/ubuntu natty main
        #Third party developers repository


  • Updated: Configuring a two-tier public key infrastructure on Windows Server 2008

    Configuring a two-tier public key infrastructure on Windows 2008 (R2), by Michaël Todorovic. Quote: This document will let you set up a two-tier public key infrastructure (also known as a PKI) using Microsoft Windows 2008. This PKI can be useful in many scenarios: a secure web server, a secure mail server, automated distribution of cert...


  • Nautilus 3.6 is a disaster for the creator of Linux Mint, who presents Nemo, a fork of the file manager

    The tone is provocative in the open source world. After Miguel de Icaza, creator of the GNOME desktop environment, declared that Linux had failed on the desktop, drawing the wrath of Linus Torvalds, it is now another open source figure's turn to make an equally controversial statement. Clement Lefebvre, creator and development lead of the Linux Mint distribution, has just declared in a blog post that Nautilus 3.6 is a disaster.


  • Copying a 14bit grayscale image (saved in long[]) to a pictureBox

    - by Itsik
    My camera gives me 14-bit grayscale images, but the API's function returns a long* to the image data (so I'm assuming 4 bytes per pixel). My application is written in C++/CLI, and the pictureBox is of .NET type. I am currently using the BitmapData.LockBits() mechanism to gain pointer access to the image data, and using memcpy(bmpData.Scan0.ToPointer(), imageData, sizeof(long)*height*width) to copy the image data to the Bitmap. For now, the only PixelFormat that works is 32-bit RGB, and the image appears in shades of blue with contours. Initializing the Bitmap as 16bppGrayscale isn't working. I would ideally want to cast the array from long to word and use a 16-bit format (hoping the 14-bit data will be displayed properly), but I'm not sure if this works. Also, I don't want to iterate over the image data, so finding the min/max and then histogram stretching to [0..255] isn't an option for me (the display must be as efficient as possible). Thanks
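    One caveat worth knowing: GDI+ generally cannot render Format16bppGrayScale, which is why the 16bpp attempt fails, and a conversion pass is usually unavoidable despite the wish not to iterate. A hedged C++/CLI sketch that shifts the 14-bit values down to 8 bits and replicates them across the channels of a 24bpp bitmap (width, height and imageData come from the question; everything else is illustrative):

        using namespace System::Drawing;
        using namespace System::Drawing::Imaging;

        Bitmap^ bmp = gcnew Bitmap(width, height, PixelFormat::Format24bppRgb);
        BitmapData^ bd = bmp->LockBits(System::Drawing::Rectangle(0, 0, width, height),
                                       ImageLockMode::WriteOnly, PixelFormat::Format24bppRgb);
        unsigned char* dst = static_cast<unsigned char*>(bd->Scan0.ToPointer());
        for (int y = 0; y < height; ++y) {
            unsigned char* row = dst + y * bd->Stride;  // Stride includes row padding
            for (int x = 0; x < width; ++x) {
                // keep the top 8 of the 14 bits; no histogram pass required
                unsigned char v = (unsigned char)(imageData[y * width + x] >> 6);
                row[x * 3] = row[x * 3 + 1] = row[x * 3 + 2] = v;
            }
        }
        bmp->UnlockBits(bd);
        pictureBox->Image = bmp;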


  • Can't install Ubuntu One

    - by Yehonatan Tsirolnik
    While trying to install Ubuntu One it throws this error:

        W: Failed to fetch http://ppa.launchpad.net/rabbitvcs/ppa/ubuntu/dists/DISTRIBUTION/main/binary-amd64/Packages  404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/rabbitvcs/ppa/ubuntu/dists/DISTRIBUTION/main/binary-i386/Packages  404 Not Found
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    and I can't install it. Thanks.


  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: rebuild indexes in SQL Server. This can be done one index at a time, or with the example script below for a specified table or for all tables in a given database.

    Why? The data in indexes gets fragmented over time. As the index grows, newly added rows are physically stored in other sections of the allocated database storage space. It is like loading your Christmas shopping into the trunk of your car: when the trunk is full, you continue loading onto the back seat. In the same way, some storage buffer is allocated for your index, but once it runs out the data is stored elsewhere, and the index is no longer stored in contiguous physical pages. To access the index, the database manager has to string together the disparate fragments into one contiguous set of pages. Defragmentation fixes that.

    What does the fragmentation affect? Depending on how large the table is and how fragmented the data is, fragmentation can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

    Which index to rebuild? As a rule, when you rebuild a table's clustered index, all other non-clustered indexes on that same table are automatically rebuilt. A table can only have one clustered index.

    How to rebuild all the indexes for one table: the DBCC DBREINDEX command works one table at a time; it will not automatically rebuild all of the indexes in a database. Passing an empty string as the index name rebuilds every index on that table.

    How to rebuild all indexes for all tables in a given database:

        USE [myDB]   -- enter your database name here
        DECLARE @tableName varchar(255)
        DECLARE TableCursor CURSOR FOR
            SELECT table_name FROM information_schema.tables
            WHERE table_type = 'base table'
        OPEN TableCursor
        FETCH NEXT FROM TableCursor INTO @tableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX(@tableName, ' ', 90)   -- ' ' = all indexes; fill factor of 90%
            FETCH NEXT FROM TableCursor INTO @tableName
        END
        CLOSE TableCursor
        DEALLOCATE TableCursor

    What does this script do? It reindexes all indexes in all tables of the given database, each with a fill factor of 90%. While DBCC DBREINDEX runs, the table becomes temporarily unavailable to your users until the rebuild has completed, so don't do this during production hours: it creates a shared lock on the tables, although it still allows read-only uncommitted data reads, i.e. SELECT.

    What is the fill factor? It is the percentage of space on each index page used for storing data when the index is created or rebuilt. It replaces the fill factor from when the index was created, becoming the new default for the index and for any other non-clustered indexes rebuilt because a clustered index is rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index. This value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified. If fillfactor is not specified, the default fill factor, 100, is used.

    How do I determine the level of fragmentation? Run the DBCC SHOWCONTIG command. However, this requires you to specify the ID of both the table and the index.
    To make it a lot easier, requiring only the table and/or index name, you can run this script:

        DECLARE @ID int, @IndexID int, @IndexName varchar(128)
        -- Specify the table and index names
        SELECT @IndexName = 'index_name'   -- name of the index
        SET @ID = OBJECT_ID('table_name')  -- name of the table
        SELECT @IndexID = IndID FROM sysindexes WHERE id = @ID AND name = @IndexName
        -- Show the level of fragmentation
        DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 915
        - Extents Scanned..............................: 119
        - Extent Switches..............................: 281
        - Avg. Pages per Extent........................: 7.7
        - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
        - Logical Scan Fragmentation ..................: 16.28%
        - Extent Scan Fragmentation ...................: 99.16%
        - Avg. Bytes Free per Page.....................: 2457.0
        - Avg. Page Density (full).....................: 69.64%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here? The Scan Density; ideally it should be 100%. As time goes by it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing.

    Here are the results for the same table and clustered index after running the script:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 692
        - Extents Scanned..............................: 87
        - Extent Switches..............................: 86
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
        - Logical Scan Fragmentation ..................: 0.00%
        - Extent Scan Fragmentation ...................: 22.99%
        - Avg. Bytes Free per Page.....................: 639.8
        - Avg. Page Density (full).....................: 92.10%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different? The Scan Density has increased from 40.78% to 100%: no fragmentation on the clustered index. Note that since we rebuilt the clustered index, all other indexes were also rebuilt.


  • SQLite doesn't have booleans or date-times.

    - by DanM
    I've been thinking about using SQLite for my next project, but I'm concerned that it seems to lack proper datetime and bit data types. If I use DbLinq (or some other ORM) to generate C# classes, will the data types of the properties be "dumbed down"? Will date-time data be placed in properties of type string or double? Will boolean data be placed in properties of type int? If yes, what are the implications? I'm envisioning a scenario where I need to write a whole second layer of classes with more specific data types and do a bunch of transformations and casts, but maybe it's not as bad as I fear. If you have any experience with this or a similar scenario, how did you handle it?
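    If it helps, the second layer does not have to be heavy; a thin typed wrapper per column is often enough. A hedged C# sketch, assuming booleans are stored as INTEGER 0/1 and date-times as ISO-8601 TEXT (the property and column names are illustrative):

        // Raw members match what the ORM materializes from SQLite
        public int IsActiveRaw { get; set; }      // column: INTEGER 0/1
        public string CreatedRaw { get; set; }    // column: TEXT, ISO-8601

        // Typed views the rest of the application uses
        public bool IsActive
        {
            get { return IsActiveRaw != 0; }
            set { IsActiveRaw = value ? 1 : 0; }
        }

        public DateTime Created
        {
            get { return DateTime.Parse(CreatedRaw, null,
                      System.Globalization.DateTimeStyles.RoundtripKind); }
            set { CreatedRaw = value.ToString("o"); }  // "o" = round-trip format
        }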


  • Parsing dbpedia JSON in Python

    - by givp
    Hello, I'm trying to get my head around the dbpedia JSON schema and can't figure out an efficient way of extracting a specific node. This is what dbpedia gives me: http://dbpedia.org/data/Ceramic_art.json. I've got the whole thing as a JSON object in Python but don't really understand how to get the English abstract from this data. I've gotten this far:

        u = "http://dbpedia.org/data/Ceramic_art.json"
        data = urlfetch.fetch(url=u)
        json_data = json.loads(data.content)
        for j in json_data["http://dbpedia.org/resource/Ceramic_art"]:
            if j == "http://dbpedia.org/ontology/abstract":
                print "it's here"

    Not sure how to proceed from here. As you can see there are multiple languages. I need to get the English abstract. Thanks for your help, g
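    A sketch of the next step, assuming DBpedia's RDF/JSON layout in which each property URI maps to a list of dicts like {"lang": "en", "value": "...", "type": "literal"}:

        resource = json_data["http://dbpedia.org/resource/Ceramic_art"]
        abstracts = resource.get("http://dbpedia.org/ontology/abstract", [])
        # keep only the literal tagged as English
        english = [a["value"] for a in abstracts if a.get("lang") == "en"]
        if english:
            print english[0]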


  • Is it dangerous to store user-entered text in a hidden form via JavaScript?

    - by KallDrexx
    In my ASP.NET MVC application I am using in-place editors to allow users to edit fields without a standard form view. Unfortunately, since I am using LINQ to SQL combined with my data mapping layer, I cannot update one field at a time and instead need to send all fields over at once. So the solution I came up with was to store all my model fields in hidden fields, and provide span tags that contain the visible data (these span tags become editable thanks to my jQuery plugin). When a user saves their edit of a field, jQuery takes the value, places it in the hidden form, and sends the whole form to the server to commit via AJAX. When the data originally goes into the hidden field (at page load) and into the span tags, it is properly encoded, but when the user changes the data in the contenteditable span field, I just run $("#hiddenfield").val($("#spanfield").html()); Am I opening any holes with this method? Obviously the server also properly encodes everything prior to database entry.


  • The Five Best things coming in Fedora 13 Linux

    <b>Cyber Cynic:</b> "Paul W. Frields, the Fedora Project leader, told me though that this release is much new user-friendly and that it's no longer just for experienced Linux users. Based on my early look at this Red Hat community Linux distribution, I agree."


  • MS Access Mark Duplicates in order of appearance - using the function RankOfDup: (SELECT Count(*) ...)

    - by veska stoyanova
    I'm trying to create a ranking that shows the sequence of agreements for the two fields Customer and Agreement. The agreement number must be unique, whereas customers can repeat. The formula

        RankOfDup: (SELECT Count(*) FROM Data a WHERE a.customer = Data.customer And a.agreement >= Data.agreement)

    works beautifully, but after this query with the columns Agreement, Customer and RankOfDup, I need to create a crosstab that transposes the RankOfDup. It works when I make the table first and then create the query, but my data is too large, so I'm trying to put the select query with the ranking directly in a crosstab query. However, when I try to do this, Access gives an error message that Microsoft Jet ... doesn't recognise Data.customer. Any ideas how I can fix this?


  • LINQ - Select Statement - The null value cannot be assigned to a member with type System.Int32 which

    - by thiag0
    I am trying to achieve the following...

        _4S.NJB_Request request = (from r in db.NJB_Requests
                                   where r.RequestId == referenceId
                                   select r).Take(1).SingleOrDefault();

    Getting the following exception...

    Message: The null value cannot be assigned to a member with type System.Int32 which is a non-nullable value type.

    StackTrace:
        at System.Data.Linq.SqlClient.SqlProvider.Execute(Expression query, QueryInfo queryInfo, IObjectReaderFactory factory, Object[] parentArgs, Object[] userArgs, ICompiledSubQuery[] subQueries, Object lastResult)
        at System.Data.Linq.SqlClient.SqlProvider.ExecuteAll(Expression query, QueryInfo[] queryInfos, IObjectReaderFactory factory, Object[] userArguments, ICompiledSubQuery[] subQueries)
        at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query)
        at System.Data.Linq.DataQuery`1.System.Linq.IQueryProvider.Execute[S](Expression expression)
        at System.Linq.Queryable.SingleOrDefault[TSource](IQueryable`1 source)
        at DAL.SqlDataProvider.MarkNJBPCRequestAsComplete(Int32 referenceId, Int32 processState)

    I have verified that 'referenceId' does have a value. Anyone know why this would happen in a select statement? Thanks!
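    The usual cause is not referenceId at all: some column of NJB_Requests contains NULL while its generated property is a non-nullable int, and the error surfaces when LINQ to SQL materializes the row. A hedged sketch of the fix in the entity mapping (the column name is illustrative; find the column that allows NULL):

        // In the entity class: a NULLable column must map to int?
        [Column(Name = "SomeStatusColumn", CanBeNull = true)]
        public int? SomeStatus { get; set; }

    Incidentally, Take(1).SingleOrDefault() can be shortened to FirstOrDefault().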


  • Multiple Denial of Service (DoS) vulnerabilities in Apache Tomcat

    - by chandan
    CVE / Description / CVSSv2 Base Score / Component / Product and Resolution:

    CVE-2011-4858 (Resource Management Errors vulnerability), CVSSv2 Base Score 5.0.
    CVE-2012-0022 (Numeric Errors vulnerability), CVSSv2 Base Score 5.0.
    Component: Apache Tomcat.
    Resolution: Solaris 11 11/11 SRU 4; Solaris 10 SPARC: 122911-29, X86: 122912-29; Solaris 9: Contact Support.

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.


  • PHP best practice: do I save HTML tags in the DB or store the HTML entity value?

    - by Matt
    Hi Guys, I was wondering which way I should do the following. I am using the TinyMCE WYSIWYG editor, which formats the user's input with the right HTML tags. Now, I need to save the data entered into the editor in a database table. Should I encode the HTML tags to their corresponding entities when inserting into the DB? Then, when I get the data back from the table, I would not have to encode it for XSS purposes, but I'd still have to decode the entities for the HTML tags to format the text. OR do I save the HTML tags into the database as-is, and then, when I get the data back from the database, encode the HTML tags to their entities? But then, as the tags should render for the user, I'd have to decode them again to actually format the data as it was entered. My thoughts are with the first option; I just wondered what you guys thought. Thanks, M
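    For what it's worth, a small sketch of the entity round-trip under option 1 (the field names are illustrative); either way, the decoding step is htmlspecialchars_decode, not eval:

        // On the way into the DB: encode the editor's HTML to entities
        $stored = htmlspecialchars($_POST['body'], ENT_QUOTES, 'UTF-8');

        // On the way out, when the markup should render again:
        $html = htmlspecialchars_decode($stored, ENT_QUOTES);
        echo $html;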


  • Is it OK to pass SQLCommand as a parameter?

    - by TooFat
    I have a business layer that passes a connection string and a SqlCommand to a data layer like so:

        public void PopulateLocalData()
        {
            System.Data.SqlClient.SqlCommand cmd = new System.Data.SqlClient.SqlCommand();
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.CommandText = "usp_PopulateServiceSurveyLocal";
            DataLayer.DataProvider.ExecSQL(ConnString, cmd);
        }

    The data layer then just executes the SQL like so:

        public static int ExecSQL(string sqlConnString, System.Data.SqlClient.SqlCommand cmd)
        {
            int rowsAffected;
            using (SqlConnection conn = new SqlConnection(sqlConnString))
            {
                conn.Open();
                cmd.Connection = conn;
                rowsAffected = cmd.ExecuteNonQuery();
                cmd.Dispose();
            }
            return rowsAffected;
        }

    Is it OK for me to pass the SqlCommand as a parameter like this, or is there a better, more accepted way of doing it? One of my concerns is that if an error occurs when executing the query, the cmd.Dispose() line will never execute. Does that mean it will continue to use up memory that will never be released?
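    One way to keep the pass-a-command pattern but guarantee disposal even when ExecuteNonQuery throws is to put the command in a using block as well; a sketch with the signature unchanged:

        public static int ExecSQL(string sqlConnString, System.Data.SqlClient.SqlCommand cmd)
        {
            using (SqlConnection conn = new SqlConnection(sqlConnString))
            using (cmd)  // disposes the command even if an exception is thrown
            {
                conn.Open();
                cmd.Connection = conn;
                return cmd.ExecuteNonQuery();
            }
        }

    (If Dispose is skipped, the command is not leaked forever; the garbage collector eventually reclaims it, but deterministic disposal is still the accepted practice.)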

