Search Results

Search found 19479 results on 780 pages for 'total record count'.

Page 11/780 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Count Email Address Domains

    - by BRADINO
    A quick tidbit I came up with today to count the email addresses in a MySQL database table, grouping them by domain. Say, for example, you have a large list of subscribers and you want to see the breakdown of people who use Hotmail, Yahoo, Gmail, etc.

        SELECT COUNT( SUBSTRING_INDEX( `email` , '@', -1 ) ) AS `count` ,
               SUBSTRING_INDEX( `email` , '@', -1 ) AS `domain`
        FROM `subscribers`
        WHERE `email` != ''
        GROUP BY `domain`
        ORDER BY `count` DESC

    This SQL statement assumes that the table is called 'subscribers' and the column containing the email addresses is 'email'. Change these two values to match your table name and email address column name.

    Read the article

  • Cisco Pix 501 - reaching local host limit, showing odd IP addresses

    - by cdonner
    I am running out of licenses on my PIX 501, and the show local-host command lists a number of odd IP addresses that do not belong to my 10.10.1.* subnet. Any idea what they are? The only thing I could find was a potential owner: DINSA, the Defence Interoperable Network Services Authority, an agency of the Ministry of Defence of the United Kingdom. That does not sound right. I don't see any active connections, though. I can't ping or traceroute these IPs, but they reappear after I clear the list, with various other addresses in that general range, up until the connection limit is reached. Based on the number denied, I suppose there would be a lot more of them if I did not have the connection limit. Very dubious. Is anybody else seeing this?

        pixfirewall# show local-host
        Interface inside: 10 active, 10 maximum active, 118 denied
        local host: <10.10.1.110>, TCP connection count/limit = 0/unlimited, TCP embryonic count = 0, TCP intercept watermark = unlimited, UDP connection count/limit = 0/unlimited, AAA: Xlate(s): Conn(s):
        local host: <10.10.1.176>, TCP connection count/limit = 0/unlimited, TCP embryonic count = 0, TCP intercept watermark = unlimited, UDP connection count/limit = 0/unlimited, AAA: Xlate(s): Conn(s):
        local host: <10.10.1.170>, TCP connection count/limit = 0/unlimited, TCP embryonic count = 0, TCP intercept watermark = unlimited, UDP connection count/limit = 1/unlimited, AAA: Xlate(s): Conn(s):
        local host: <10.10.1.175>, TCP connection count/limit = 11/unlimited, TCP embryonic count = 0, TCP intercept watermark = unlimited, UDP connection count/limit = 1/unlimited, AAA: Xlate(s): Conn(s):
        local host: <10.10.1.108>, TCP connection count/limit = 0/unlimited, TCP embryonic count = 0, TCP intercept watermark = unlimited, UDP connection count/limit = 0/unlimited, AAA: Xlate(s): Conn(s):
        local host: <25.33.41.115>, TCP connection count/limit = 0/unlimited, TCP embryonic count = 0, TCP intercept watermark = unlimited, UDP connection count/limit = 0/unlimited, AAA: Xlate(s): Conn(s):   // what is this?
        local host: <25.33.226.124>, TCP connection count/limit = 0/unlimited, TCP embryonic count = 0, TCP intercept watermark = unlimited, UDP connection count/limit = 0/unlimited, AAA: Xlate(s): Conn(s):   // what is this?
        local host: <10.10.1.172>, TCP connection count/limit = 0/unlimited, TCP embryonic count = 0, TCP intercept watermark = unlimited, UDP connection count/limit = 0/unlimited, AAA: Xlate(s): Conn(s):
        local host: <25.36.114.91>, TCP connection count/limit = 0/unlimited, TCP embryonic count = 0, TCP intercept watermark = unlimited, UDP connection count/limit = 0/unlimited, AAA: Xlate(s): Conn(s):   // what is this?
        local host: <10.10.1.109>, TCP connection count/limit = 0/unlimited, TCP embryonic count = 0, TCP intercept watermark = unlimited, UDP connection count/limit = 0/unlimited, AAA: Xlate(s): Conn(s):
        pixfirewall#

    Read the article

  • Approach for packing 2D shapes while minimizing total enclosing area

    - by Dennis
    Not sure about my tags for this question, but in short... I need to solve a problem of packing industrial parts into crates while minimizing the total containing area. These parts are motors, or pumps, or custom-made components, and they have quite unusual shapes. For some, it may be possible to assume that a part === rectangular cuboid, but some are not so simple, i.e. they have a shape more like that of a hammer or the letter T. With those (assuming a 2D shape), by alternating the direction of top and bottom, one can pack more objects into the same space than if all tops faced the same direction. Crude example below with letter "T"-shaped parts: ***** xxxxx ***** x ***** *** ooo * x vs * x vs * x vs * x o * x * xxxxx * x * x o xxxxx xxx

    Right now we are solving the problem with something like this:

    1. using CAD software, make actual models of how things fit in crate boxes
    2. make estimates of actual crate dimensions and write them into an Excel file

    (1) is a crazy amount of work, and as a result we have just a limited number of possible entries in (2), the Excel file. The good thing is that programming this is relatively easy. Given a combination of products to go into crates, we do a lookup, and if an entry exists in the Excel file (or database), we bring it out. If it doesn't, we say "sorry, no data!". I don't necessarily want to go full force on making up some crazy algorithm that, given a geometrical part description, can align, rotate, and figure out the best packing of parts into a crate, given its shape, but maybe I do..

    Question
    Well, here is my question: assuming that I can represent my parts in 2D (to be determined how), and that some parts look like the letter T while some parts look like rectangles, which algorithm can I use to give me a good estimate of the dimensions of the encompassing area, while ensuring that the parts are packed into the minimal possible area, to minimize crating/shipping costs? Are there approximation algorithms? Seeing how this can get complex, is there an existing library I could use?

    My thought / Approach
    My naive approach would be to define a way to describe the position of parts, place the first part, and compute the total enclosing area and dimensions. Then place the 2nd part in 0-degree orientation and repeat; place it at 180-degree orientation and repeat (for my case I don't think 90-degree rotations will be meaningful, due to the long lengths of the parts). Proceed by brute force, "tacking on" other parts to the enclosing area until all parts are processed. I may have to shift some parts a tad (see the 3rd pictorial example above with the letter T). This adds a layer of 2D complexity rather than 1D. I am not sure how to approach this. One idea I have is genetic algorithms, but I think those will take up too much processing power and time. I will need to look out for shape collisions, as well as add extra padding space, since we are talking about real parts with irregularities rather than perfect imaginary blocks. I'm afraid this can get geometrically messy fairly fast, and I'd rather keep things simple, if I can. But what if the best (practical) solution is to pack things into different crate boxes rather than just one? That can get a bit more tricky. There is a human element involved as well: like parts can go into the same box and are thus a constraint to be considered. Some parts that are not the same are sometimes grouped together for shipping and can be considered a common grouped item. Sometimes customers want things shipped their way, which adds a human element to the constraints, so there will have to be some customization.
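
    As a first cut at the brute-force idea above, here is a minimal sketch (C#; all names hypothetical, and parts approximated by their bounding rectangles) of the classic first-fit-decreasing-height "shelf" heuristic, which gives a fast upper-bound estimate of the enclosing area for a fixed crate width:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class ShelfPacker
        {
            // Packs (width, height) bounding boxes into a strip of fixed width.
            // FFDH: sort by height descending, fill shelves left to right,
            // and open a new shelf when the current one is full.
            // Returns the enclosing (width, height) actually used.
            public static (double Width, double Height) Pack(
                IEnumerable<(double W, double H)> parts, double stripWidth)
            {
                double x = 0, y = 0, shelfHeight = 0, usedWidth = 0;
                foreach (var part in parts.OrderByDescending(b => b.H))
                {
                    if (x + part.W > stripWidth)  // current shelf is full
                    {
                        y += shelfHeight;         // start a new shelf above it
                        x = 0;
                        shelfHeight = 0;
                    }
                    x += part.W;
                    shelfHeight = Math.Max(shelfHeight, part.H);
                    usedWidth = Math.Max(usedWidth, x);
                }
                return (usedWidth, y + shelfHeight);
            }
        }

    Interlocking T-shapes as in the example need a shape-aware placement (no-fit polygons are the usual tool), but rectangles plus padding are often good enough for a shipping-cost estimate.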

    Read the article

  • How to create a DataAccessLayer?

    - by NIGHIL DAS
    hi, i am creating a database applicatin in .Net. I am using a DataAccessLayer for communicating .net objects with database but i am not sure that this class is correct or not Can anyone cross check it and rectify any mistakes namespace IDataaccess { #region Collection Class public class SPParamCollection : List<SPParams> { } public class SPParamReturnCollection : List<SPParams> { } #endregion #region struct public struct SPParams { public string Name { get; set; } public object Value { get; set; } public ParameterDirection ParamDirection { get; set; } public SqlDbType Type { get; set; } public int Size { get; set; } public string TypeName { get; set; } // public string datatype; } #endregion /// <summary> /// Interface DataAccess Layer implimentation New version /// </summary> public interface IDataAccess { DataTable getDataUsingSP(string spName); DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection); DataSet getDataSetUsingSP(string spName); DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection); SqlDataReader getDataReaderUsingSP(string spName); SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection); int executeSP(string spName); int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas); int executeSP(string spName, SPParamCollection spParamCollection); DataTable getDataUsingSqlQuery(string strSqlQuery); int executeSqlQuery(string strSqlQuery); SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection); SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection); SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas); int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection); object getScalarUsingSP(string spName); object getScalarUsingSP(string spName, SPParamCollection spParamCollection); } } using IDataaccess; namespace Dataaccess { /// <summary> /// Class DataAccess Layer implimentation New version /// </summary> public class DataAccess : IDataaccess.IDataAccess { #region Public variables static string Strcon; DataSet dts = new DataSet(); public DataAccess() { Strcon = sReadConnectionString(); } private string sReadConnectionString() { try { //dts.ReadXml("C:\\cnn.config"); //Strcon = dts.Tables[0].Rows[0][0].ToString(); //System.Configuration.Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); //Strcon = config.ConnectionStrings.ConnectionStrings["connectionString"].ConnectionString; // Add an Application Setting. 
//Strcon = "Data Source=192.168.50.103;Initial Catalog=erpDB;User ID=ipixerp1;Password=NogoXVc3"; Strcon = System.Configuration.ConfigurationManager.AppSettings["connection"]; //Strcon = System.Configuration.ConfigurationSettings.AppSettings[0].ToString(); } catch (Exception) { } return Strcon; } public SqlConnection connection; public SqlCommand cmd; public SqlDataAdapter adpt; public DataTable dt; public int intresult; public SqlDataReader sqdr; #endregion #region Public Methods public DataTable getDataUsingSP(string spName) { return getDataUsingSP(spName, null); } public DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public DataSet getDataSetUsingSP(string spName) { return getDataSetUsingSP(spName, null); } public DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); DataSet ds = new DataSet(); adpt.Fill(ds); return ds; } } } finally { connection.Close(); } } public SqlDataReader getDataReaderUsingSP(string spName) { return getDataReaderUsingSP(spName, null); } public SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; sqdr = cmd.ExecuteReader(); return (sqdr); } } } finally { connection.Close(); } } public int executeSP(string spName) { return executeSP(spName, null); } public int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; return (cmd.ExecuteNonQuery()); } } } finally { connection.Close(); } } public int 
executeSP(string spName, SPParamCollection spParamCollection) { return executeSP(spName, spParamCollection, false); } public DataTable getDataUsingSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) connection.Open(); { using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public int executeSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); return (intresult); } } } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, null, spParamReturnCollection); } public int executeSPReturnParam() { return 0; } public int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Size); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); connection.Close(); //for (int i = 0; i < spParamReturnCollection.Count; i++) //{ // spParamReturned.Add(new SPParams // { // Name = spParamReturnCollection[i].Name, // Value = cmd.Parameters[spParamReturnCollection[i].Name].Value // }); //} } } return intresult; } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, spParamCollection, spParamReturnCollection, false); } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { //cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = 
CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Value); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; cmd.ExecuteNonQuery(); connection.Close(); for (int i = 0; i < spParamReturnCollection.Count; i++) { spParamReturned.Add(new SPParams { Name = spParamReturnCollection[i].Name, Value = cmd.Parameters[spParamReturnCollection[i].Name].Value }); } } } return spParamReturned; } catch (Exception ex) { return null; } finally { connection.Close(); } } public object getScalarUsingSP(string spName) { return getScalarUsingSP(spName, null); } public object getScalarUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); cmd.CommandTimeout = 60; } cmd.CommandType = CommandType.StoredProcedure; return cmd.ExecuteScalar(); } } } finally { connection.Close(); cmd.Dispose(); } } #endregion } }
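
    A few things stand out on a quick read. The SqlConnection/SqlCommand/SqlDataAdapter fields are shared instance state, so one instance of this class is not safe to use from multiple threads; getDataReaderUsingSP returns a SqlDataReader whose connection has already been disposed by the using block, so the reader is unusable by the caller; the empty catch in sReadConnectionString swallows configuration errors; and the finally { connection.Close(); } blocks are redundant once using handles disposal. A minimal sketch of one method rewritten with locals instead of shared fields (same SPParamCollection type as in the post):

        using System.Data;
        using System.Data.SqlClient;

        public DataTable GetDataUsingSP(string spName, SPParamCollection spParams = null)
        {
            // Local connection and command: disposed deterministically, safe across threads.
            using (var connection = new SqlConnection(Strcon))
            using (var cmd = new SqlCommand(spName, connection))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.CommandTimeout = 60;
                if (spParams != null)
                    foreach (var p in spParams)
                        cmd.Parameters.AddWithValue(p.Name, p.Value);

                var dt = new DataTable();
                using (var adapter = new SqlDataAdapter(cmd))
                    adapter.Fill(dt); // Fill opens and closes the connection itself
                return dt;
            }
        }

    For the data reader variant, skip the using block around the connection and call cmd.ExecuteReader(CommandBehavior.CloseConnection) instead, so the connection is closed when the caller disposes the reader.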

    Read the article

  • How can I get SQL Server transactions to use record-level locks?

    - by Joe White
    We have an application that was originally written as a desktop app, lo these many years ago. It starts a transaction whenever you open an edit screen, commits if you click OK, and rolls back if you click Cancel. This worked okay for a desktop app, but now we're trying to move to ADO.NET and SQL Server, and the long-running transactions are problematic.

    I found that we'll have a problem when multiple users are all trying to edit (different subsets of) the same table at the same time. In our old database, each user's transaction would acquire record-level locks on every record they modified during their transaction; since different users were editing different records, everyone got their own locks and everything worked. But in SQL Server, as soon as one user edits a record inside a transaction, SQL Server appears to take a lock on the entire table. When a second user tries to edit a different record in the same table, the second user's app simply locks up, because the SqlConnection blocks until the first user either commits or rolls back.

    I'm aware that long-running transactions are bad, and I know that the best solution would be to change these screens so that they no longer keep transactions open for a long time. But since that would mean some invasive and risky changes, I also want to research whether there's a way to get this code up and running as-is, just so I know what my options are. How can I get two different users' transactions in SQL Server to lock individual records instead of the entire table?

    Here's a quick-and-dirty console app that illustrates the issue. I've created a database called "test1", with one table called "Values" that just has ID (int) and Value (nvarchar) columns. If you run the app, it asks for an ID to modify, starts a transaction, modifies that record, and then leaves the transaction open until you press ENTER. I want to be able to:

    1. start the program and tell it to update ID 1;
    2. let it get its transaction and modify the record;
    3. start a second copy of the program and tell it to update ID 2;
    4. have it able to update (and commit) while the first app's transaction is still open.

    Currently it freezes at step 4, until I go back to the first copy of the app and close it or press ENTER so it commits. The call to command.ExecuteNonQuery blocks until the first connection is closed.

        public static void Main()
        {
            Console.Write("ID to update: ");
            var id = int.Parse(Console.ReadLine());
            Console.WriteLine("Starting transaction");
            using (var scope = new TransactionScope())
            using (var connection = new SqlConnection(@"Data Source=localhost\sqlexpress;Initial Catalog=test1;Integrated Security=True"))
            {
                connection.Open();
                var command = connection.CreateCommand();
                command.CommandText = "UPDATE [Values] SET Value = 'Value' WHERE ID = " + id;
                Console.WriteLine("Updating record");
                command.ExecuteNonQuery();
                Console.Write("Press ENTER to end transaction: ");
                Console.ReadLine();
                scope.Complete();
            }
        }

    Here are some things I've already tried, with no change in behavior:

    - changing the transaction isolation level to "read uncommitted"
    - specifying WITH (ROWLOCK) on the UPDATE statement
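
    One assumption worth checking with this repro (the post doesn't say either way): if [Values].ID has no index, the UPDATE has to scan the whole table and takes an update lock on every row it examines, which blocks the second writer even though only one row ultimately changes. A unique index lets each transaction seek to, and lock, just its own row. A hedged one-time setup sketch, index name hypothetical:

        using (var connection = new SqlConnection(@"Data Source=localhost\sqlexpress;Initial Catalog=test1;Integrated Security=True"))
        {
            connection.Open();
            var command = connection.CreateCommand();
            // Without an index on ID, the WHERE clause is resolved by a table
            // scan, and scanned rows get locked; with it, only the target row is.
            command.CommandText =
                "IF NOT EXISTS (SELECT * FROM sys.indexes WHERE name = 'IX_Values_ID') " +
                "CREATE UNIQUE INDEX IX_Values_ID ON [Values] (ID)";
            command.ExecuteNonQuery();
        }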

    Read the article

  • Do we need Record Level Locking when we already have Transaction for online ordering? (of concert ti

    - by Jian Lin
    For online ordering of concert seats or airline tickets, do we need record-level locking, or is a transaction good enough? For a concert ticket (say, seat number 20B) or an airline ticket (even with overbooking, the limit is 210, for example), I think the website cannot lock any record or begin a transaction while showing the ticket purchase screen. But after the user clicks "Confirm Purchase", the server should begin a transaction, purchase seat number 20B, and try to commit. If another user already bought seat 20B in a previous transaction, is it the "commit" step where the current transaction will fail? So... we don't need record-level locking? Do transactions always run serialized (one after another), and is that why we can know for sure there is no "race condition"? In what situation is record-level locking needed, then?
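
    A common pattern for this case (a sketch, not from the post; table, column, and method names hypothetical) is to make the purchase a single conditional UPDATE and check the affected row count; the database serializes writes to the same row, so exactly one buyer wins the race:

        using System.Data.SqlClient;

        // Attempt to buy a seat only if it is still unsold; true means we got it.
        static bool TryBuySeat(string connectionString, string seatNo, int buyerId)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var command = connection.CreateCommand())
                {
                    command.CommandText =
                        "UPDATE Seats SET BuyerId = @buyer " +
                        "WHERE SeatNo = @seat AND BuyerId IS NULL";
                    command.Parameters.AddWithValue("@buyer", buyerId);
                    command.Parameters.AddWithValue("@seat", seatNo);
                    // 0 rows affected means another transaction bought it first.
                    return command.ExecuteNonQuery() == 1;
                }
            }
        }

    A single statement is atomic on its own, so no explicit lock is held while the user stares at the purchase screen; transactions in general are not serialized, but two writes to the same row are, which is what makes the row-count check reliable.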

    Read the article

  • How to Total Rows and Columns in a Word 2013 Table

    - by Lori Kaufman
    If you’re working in Word and you need to total values in a table, you can do so without having to enter the data into Excel and then copy and paste it into Word. Word can do simple calculations such as summing, multiplying, and averaging. NOTE: When you add new rows or columns of values to a table in Word, the formulas will not automatically update. To update a formula, right-click on the formula and choose Update Field from the popup menu. To enter a formula into a cell in a table, put the cursor in the cell and click the Layout tab under Table Tools.

    Read the article

  • Count word occurrences in a text field with LINQ

    - by Yoann. B
    How can I get the occurrence count of a word in a database text field with LINQ? Keyword token sample: ASP.NET

    EDIT 4: Database records:

    Record 1: [TextField] = "Blah blah blah ASP.NET bli bli bli ASP.NET blu ASP.NET yop yop ASP.NET"
    Record 2: [TextField] = "Blah blah blah bli bli bli blu ASP.NET yop yop ASP.NET"
    Record 3: [TextField] = "Blah ASP.NET blah ASP.NET blah ASP.NET bli ASP.NET bli bli ASP.NET blu ASP.NET yop yop ASP.NET"

    So Record 1 contains 4 occurrences of the "ASP.NET" keyword, Record 2 contains 2 occurrences, and Record 3 contains 7 occurrences.

    Expected extraction, an IList<RecordModel> ordered by word count descending:

    Record 3
    Record 1
    Record 2

    LINQ to SQL would be best, but LINQ to Objects works too :) NB: the "." in the ASP.NET keyword is not an issue (that is not the goal of this question).
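
    A LINQ to Objects sketch of the counting and ordering (the RecordModel shape is assumed from the post; LINQ to SQL cannot translate Regex, so the records would need to be pulled into memory with AsEnumerable() first):

        using System.Collections.Generic;
        using System.Linq;
        using System.Text.RegularExpressions;

        class RecordModel
        {
            public string TextField { get; set; }
        }

        static class KeywordSearch
        {
            // Counts non-overlapping occurrences of the keyword in each record's
            // text field, then orders records from most to fewest occurrences.
            public static IList<RecordModel> OrderByKeywordCount(
                IEnumerable<RecordModel> records, string keyword)
            {
                return records
                    .Select(r => new
                    {
                        Record = r,
                        Count = Regex.Matches(r.TextField, Regex.Escape(keyword)).Count
                    })
                    .OrderByDescending(x => x.Count)
                    .Select(x => x.Record)
                    .ToList();
            }
        }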

    Read the article

  • Unable to record sound from a web browser (Firefox / Chromium) using recordmydesktop

    - by thamurath
    I have to do some screencast tutorials and I am using recordMyDesktop with the GTK frontend to do it. I need to record the sound as well, and that is where I have found the problem. It took me some time, but now I can record the sound from almost every application on my desktop... almost. I need to capture some sound from a web application using Java, but when I load the page, nothing appears in the Playback tab of pavucontrol. I think this is the problem: if there is no sound stream, recordMyDesktop presumably thinks there is no sound to record. The funny thing is that I can hear the sound in my speakers! I have tried Firefox and Chromium with no success, although I have been able to record YouTube videos without a problem, so it seems that Java is the key here. Any suggestion or idea?

    P.S.: I am using Ubuntu 11.10 with this configuration (if more information is needed, please let me know). Sigh, I cannot post images... so: I have an Audigy 2 sound card using the Analog Stereo Output profile. I also have an "Internal Audio" device, but its profile is set to "Off". In recordMyDesktop > Advanced > Sound: Device = default.

    Read the article

  • A record not resolving

    - by user1561108
    I have a hosted domain at SiteGround. On this domain and host I have a subdomain with a WordPress install. I wish to move this blog to another host (HostGator) while keeping the domain with SiteGround. To do this I created a hosting account at HostGator, got its IP address, and set the A record in SiteGround's cPanel accordingly:

    subdomain.example.com 14400 A (IP of HostGator account)

    Going by this online traceroute tool, the records appear to have been updated (over 4 hours ago now), as the name now resolves to a theplanet.com server location, which HostGator uses, yet the subdomain is still not resolving from a web browser. The account at HostGator has been set up and is navigable via ip-address/~accountname. What's going wrong here?

    I should add that the relevant DNS records on the HostGator side look like this:

    subdomain.example.com 86400 IN SOA ns483.websitewelcome.com.
    subdomain.example.com 86400 IN NS ns1.siteground145.com.
    subdomain.example.com 86400 IN NS ns2.siteground145.com.
    subdomain.example.com 14400 IN A 74.54.176.3

    I'm not sure if the HostGator record should be classed as the SOA record, but I don't know enough about it to be sure. Is this the source of the problem?

    Read the article

  • Find the total of values in a variable in PHP

    - by Hmwd
    Hi, I am building a shopping cart. I have a variable $total. When I add the 1st product, priced at 200, I get $total = 200. When I add the 2nd product, priced at 100, I get $total = 200100. When I add the 3rd product, priced at 400, I get $total = 200100400. I want to get $total = 700. How do I fix this problem? Please help me. Thanks, Hmwd

    Read the article

  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
    A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself. This post is designed to help beginners to the Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here].

    ADF Business Components: triggers, default table values and instead-of views

    Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schemas provided with the Oracle database, such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects. This is where unexpected problems can sneak in.

    The crime

    Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error:

    JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x]

    ...where x is the primary key value of the row at hand. In a production environment with multiple users this error may be legit: one of the other users has updated the row since you queried it. Yet in a development environment this error is just plain confusing. If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users? It must be the phantom ADF developer! [insert dramatic music here]

    The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page.

    A false conclusion

    What can possibly cause this issue if it isn't our phantom ADF developer? Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user? How could our phantom ADF developer even take out a lock if this is the case? Maybe ADF has a bug; maybe ADF isn't implementing record locking at all? Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record? Why do we see JBO-25014?

    Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF's locking strategy. It is determined by the Application Module's jbo.locking.mode property. The default (as of JDev 11.1.1.4.0, if memory serves me correctly) and recommended value is optimistic; the other valid value is pessimistic.

    Next we need a mechanism to check that ADF is issuing the LOCK statements to the database. We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting -Djbo.debugoutput=console. At runtime this option turns on instrumentation within the ADF BC layer which, among a lot of extra detail displayed in the log window, will show the actual SQL statements issued to the database, including the LOCK statement we're looking to confirm.
    Setting our locking mode to pessimistic and opening, in the Business Components Browser, a JSF page that allows us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record we see, among others, the following log entries:

    [421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
    [422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
    [423] Where binding param 1: 1206

    As can be seen on line 422, a LOCK-FOR-UPDATE is indeed issued to the database. Later, when we commit the record, we see:

    [441] OracleSQLBuilder: SAVEPOINT 'BO_SP'
    [442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)
    [443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
    [444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
    [445] Update binding param 1: N
    [446] Where binding param 2: 1206
    [447] BookingsView1 notify COMMIT ...
    [448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
    [449] EntityCache close prepared statement

    ...and as a result the changes are saved to the database, and the lock is released.

    Let's see what happens when we use the optimistic locking mode, this time to change a BOOKINGS record's CHARGEABLE column again. As soon as we edit the record we see little activity in the logs: nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row. However, when we save our record by issuing a commit, the following is recorded in the logs:

    [509] OracleSQLBuilder: SAVEPOINT 'BO_SP'
    [510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)
    [511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
    [512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
    [513] Where binding param 1: 1205
    [514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)
    [515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
    [516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
    [517] Update binding param 1: Y
    [518] Where binding param 2: 1205
    [519] BookingsView1 notify COMMIT ...
    [520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
    [521] EntityCache close prepared statement

    Again, even though we're seeing the mid-tier delay the LOCK statement until commit time, it is in fact occurring, on line 512, and released as part of the commit issued on line 519. Therefore, with either optimistic or pessimistic locking, a lock is indeed issued.

    Our conclusion at this point must be that, unless there's the unlikely cause that the LOCK statement is never really hitting the database, or the even less likely cause that the database has a bug, ADF does in fact take out a lock on the record before allowing the current user to update it. So there's no way our phantom ADF developer could modify the record if he tried without at least someone receiving a lock error. Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.

    Who is the phantom?

    At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don't understand yet what's causing it.
    This leads on to two further questions: how does ADF know another user has changed the row, and what's been changed anyway?

    To answer the first question, the Fusion Guide's section 4.10.11, "How to Protect Against Losing Simultaneously Updated Data", which details the Entity Object Change-Indicator property, gives us the answer:

    "At runtime the framework provides automatic 'lost update' detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database."

    The guide further suggests how to make this solution more efficient:

    "You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row. ... To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values."

    We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates; rather it keeps a copy of the original record fetched, separate from the user-changed version of the record, and it compares the original record against the one in the database when the lock is taken out. If the values don't match, be it with the default compare-all-columns behaviour or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error.

    This leaves one last question. Now that we know the mechanism by which ADF identifies a changed row, we still don't know what's changed or who changed it.

    The real culprit

    What's changed? We know the record in the mid-tier has been changed by the user; however ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed. This leads us to conclude the database record has changed, but how and by whom?

    There are three potential causes:

    Database triggers

    The database trigger, among other uses, can be configured to fire PL/SQL code on a database table insert, update or delete. In particular, on an insert or update the trigger can override the value assigned to a particular column. The trigger execution is actioned by the database on behalf of the user initiating the insert or update action.

    Why this causes the issue specific to our ADF use: when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database. However the cached record is instantly out of date, as the database trigger has modified the record that was actually written to the database. Thus when we update the record we just inserted or updated for a second time, ADF compares its original copy of the record to that in the database, and it detects the record has been changed, giving us JBO-25014. This is probably the most common cause of this problem.

    Default values

    A second reason this issue can occur is another database feature, default column values.
    When creating a database table, the schema designer can define default values for specific columns. For example, a CREATED_BY column could be set to SYSDATE, or a flag column to Y or N. Default values are only used by the database when a user inserts a new record and the specific column is assigned NULL; the database in this case will overwrite the column with the default value.

    As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario.

    Instead-of-trigger views

    I must admit I haven't double-checked this scenario, but it seems plausible: the Oracle database's instead-of-trigger view (sometimes referred to as an instead-of view). A view in the database is based on a query and, depending on the query's complexity, may support insert, update and delete functionality to a limited degree. In order to support fully insertable, updatable and deletable views, Oracle introduced the instead-of view, which gives the view designer the ability to define not only the view query but also a set of programmatic PL/SQL triggers in which the developer can define their own logic for inserts, updates and deletes.

    While this provides the database programmer a very powerful feature, it can cause issues for our ADF application. On inserting or updating a record through the instead-of view, the record and data that go in are not necessarily the data that comes out when ADF compares the records, as the view developer has the option to do practically anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view's underlying query for fetching the data.

    Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of one developer on an isolated database. The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion. Assuming none of the above features is the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if 2 users modify the same record.

    At this point, under project timeline pressure, the obvious fix for developers is to drop both the database triggers and the default values from the underlying tables. However we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems. Dropping the database triggers or default values that existing Oracle Forms applications assume and require to be in place could cause unexpected behaviour and bugs in the Forms application. Proficient software engineers would recognize that such a change may require a partial or full regression test of the existing legacy system, a potentially costly and time-consuming exercise; not ideal.

    Solving the mystery once and for all

    Luckily ADF has built-in functionality to deal with this issue, which is no surprise, as Oracle, the author of ADF, also built the database and is fully aware of its feature set: at the Entity Object attribute level, the Refresh After Insert and Refresh After Update properties.
    Simply selecting these instructs ADF BC, after inserting or updating a record to the database, to expect the database to modify the said attributes, and to read a copy of the changed attributes back into its cached mid-tier record. Thus the next time the developer modifies the current record, the comparison between the mid-tier record and the database record matches, and "JBO-25014: Another user has changed" is no longer an issue.

    [Post edit: as Oracle's Steven Davelaar correctly points out in the comments below, the above solution will not work for instead-of-trigger views, as it relies on the SQL RETURNING clause, which is incompatible with this type of view.]

    Alternatively you can set the Change Indicator on one of the attributes. This will work as long as the related column for the attribute in the database isn't itself inadvertently updated. Then again, you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back off, the original issue will return.

    Read the article

  • Semaphore - What is the use of initial count?

    - by Sandbox
    http://msdn.microsoft.com/en-us/library/system.threading.semaphoreslim.aspx

    To create a semaphore, I need to provide an initial count and a maximum count. MSDN states that the initial count is "the initial number of requests for the semaphore that can be granted concurrently", while the maximum count is "the maximum number of requests for the semaphore that can be granted concurrently".

    I can understand that the maximum count is the maximum number of threads that can access a resource concurrently. But what is the use of the initial count?

    If I create a semaphore with an initial count of 0 and a maximum count of 2, none of my thread pool threads are able to access the resource. If I set the initial count to 1 and the maximum count to 2, then only one thread pool thread can access the resource. It is only when I set both the initial count and the maximum count to 2 that 2 threads are able to access the resource concurrently. So, I am really confused about the significance of the initial count.

    SemaphoreSlim semaphoreSlim = new SemaphoreSlim(0, 2); // all thread pool threads wait
    SemaphoreSlim semaphoreSlim = new SemaphoreSlim(1, 2); // only one thread has access to the resource at a time
    SemaphoreSlim semaphoreSlim = new SemaphoreSlim(2, 2); // two thread pool threads can access the resource concurrently
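
    A small console sketch (names hypothetical) of what a non-maximal initial count is for: the semaphore starts "closed", gating workers until the owner releases the slots, for example once some one-time setup has finished:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class Program
        {
            static void Main()
            {
                // initialCount 0: the resource is not ready yet, so every Wait() blocks.
                // maxCount 2: once released, at most two threads may enter at a time.
                var semaphore = new SemaphoreSlim(0, 2);

                for (int i = 0; i < 2; i++)
                {
                    int id = i;
                    Task.Run(() =>
                    {
                        semaphore.Wait(); // blocks until a slot is released
                        Console.WriteLine("worker {0} entered", id);
                    });
                }

                Thread.Sleep(500);    // simulate setup work
                semaphore.Release(2); // open both slots; both workers proceed
                Thread.Sleep(500);    // give the workers time to print
            }
        }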

    Read the article

  • How to add a reverse PTR record on Amazon Route 53?

    - by Oscar Cabrero
    If I have the IP 168.144.254.X and I would like to add a PTR record in Amazon in the form of X.254.144.168.in-addr.arpa, what should be in the name field and what should be in the value field? I have a zone created with a name like mydomain.com, which hosts the DNS records for my IP. Amazon won't let me add a value of X.254.144.168.in-addr.arpa in the name field. Do I need to create a new zone for the IP in order to allow this?

    Read the article

  • Gain Total Control of Systems running Oracle Linux

    - by Anand Akela
    Oracle Linux is the best Linux for enterprise computing needs, and Oracle Enterprise Manager enables enterprises to gain total control over systems running Oracle Linux. Linux management functionality is available as part of Oracle Enterprise Manager 12c and is available to Oracle Linux Basic and Premier Support customers at no cost. The solution provides an integrated and cost-effective approach to complete Linux systems lifecycle management, and delivers comprehensive provisioning, patching, monitoring, and administration capabilities via a single, web-based user interface, thus significantly reducing the complexity and cost associated with managing Linux operating system environments.

    Many enterprises are transforming their IT infrastructure from multiple independent datacenters to an Infrastructure-as-a-Service (IaaS) model, in which shared pools of compute and storage are made available to end-users on a self-service basis. While providing significant improvements when implemented properly, this strategy introduces change and complexity at a time when datacenters are already understaffed and overburdened. To aid in this transformation, IT managers need the proper tools to help them provide the array of IT capabilities required throughout the organization without stretching their staff and budget to the limit. Oracle Enterprise Manager 12c offers the advanced capabilities to enable IT departments and end-users to take advantage of the many benefits and cost savings of IaaS. Oracle Enterprise Manager Ops Center 12c addresses this challenge with a converged approach that integrates systems management across the infrastructure stack, helping organizations to streamline operations, increase productivity, and reduce system downtime.

    You can see the Linux management functionality in action by watching the latest integrated Linux management demo.

    Stay connected with Oracle Enterprise Manager: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Talking About Training at the Total Training Conference in Lisbon

    - by Julio Rodriguez
    On October 22nd, the 2nd edition of the Total Training Conference, Portugal's corporate training fair, was held in Lisbon. This year it brought together close to 90 Human Resources professionals and covered the topics of greatest interest to corporate training departments.

    What was discussed at the conference? The main threads of the talks centered on combining classroom training with e-learning, to get the most out of training budgets and to use the right medium to reach learners most effectively, whether we are trying to instill values and behaviors or simply transferring knowledge that has to be applied day to day. Informal learning, also known as social learning, was also covered: how we can turn our employees into teachers of their colleagues, addressing needs that are occasional but sometimes critical. The final talk explained the novel topic of gamification, or how to introduce playful game elements into the creation of training content to increase its effectiveness, pinning down definitions and clarifying that training is not gamification, even though the similarity can cause confusion.

    What did Oracle contribute to the day? At Oracle we wanted to do our small part by explaining the benefits of a cloud training solution where, in terms of functionality, you give up nothing that an on-premise solution can offer, with a number of additional advantages: you pay per use, with no large upfront outlays such as license fees, server purchases, or hiring or assigning IT staff to maintain an in-house system. Likewise, since the system is maintained by the vendor itself, it is updated constantly and transparently for the customer, without the headaches typical of implementation projects. We took the opportunity to discuss some of our customers' transformational projects, to illustrate how a training system can, for example, turn a department that is a cost center into a profit center, or how a well-planned training rollout can reduce turnover, increase employee satisfaction, and help launch new products.

    In short, a most worthwhile day if you are a training professional in Portugal. If you want to learn about Oracle's e-learning solutions, you will find them here.

    Read the article

  • Recommended total system backup solution

    - by bioShark
    I hope this question won't get closed immediately since it has a generic title. I already searched a bit around the answers here, but nothing satisfied me. I want a backup solution that makes a total backup, so that I can restore my Ubuntu install in case of major failures, like the HDD failing. As far as I can see, I have 2 choices: 1) Backing up with Deja Dup to an external disk. This is fine and I am already doing it, but in case my HDD fails and I make a new Ubuntu install on a new disk, will Deja Dup be able to restore all my settings and stuff from the backed-up files? If it can, then what other files/folders should I add in Deja Dup to back up (currently I have set only the recommended /home folder)? Is there a point in telling Deja Dup to back up everything under "/"? 2) Disk/partition cloning software. This would be something similar to Norton Ghost. Is there such software with a nice GUI that you could recommend for Ubuntu? And even better, it would be nice if Ubuntu's live CD could recognize such a clone at the install step. I am using 11.10.

    Read the article

  • Getting the total number of processors a computer has (C#)

    - by mbcrump
    Here is a code snippet for getting the total number of processors a computer has without using Environment.ProcessorCount. I found out that Environment.ProcessorCount does not necessarily return the correct value on some Intel-based CPUs.

        using System;
        using System.Runtime.InteropServices;

        namespace ConsoleApplication4
        {
            class Program
            {
                static void Main(string[] args)
                {
                    int c = ProcessorCount;
                    Console.WriteLine("The computer has {0} processors", c);
                    Console.ReadLine();
                }

                private static class NativeMethods
                {
                    [StructLayout(LayoutKind.Sequential)]
                    internal struct SYSTEM_INFO
                    {
                        public ushort wProcessorArchitecture;
                        public ushort wReserved;
                        public uint dwPageSize;
                        public IntPtr lpMinimumApplicationAddress;
                        public IntPtr lpMaximumApplicationAddress;
                        public UIntPtr dwActiveProcessorMask;
                        public uint dwNumberOfProcessors;
                        public uint dwProcessorType;
                        public uint dwAllocationGranularity;
                        public ushort wProcessorLevel;
                        public ushort wProcessorRevision;
                    }

                    [DllImport("kernel32.dll", CharSet = CharSet.Auto, ExactSpelling = true)]
                    internal static extern void GetNativeSystemInfo(ref SYSTEM_INFO lpSystemInfo);
                }

                public static int ProcessorCount
                {
                    get
                    {
                        NativeMethods.SYSTEM_INFO lpSystemInfo = new NativeMethods.SYSTEM_INFO();
                        NativeMethods.GetNativeSystemInfo(ref lpSystemInfo);
                        return (int)lpSystemInfo.dwNumberOfProcessors;
                    }
                }
            }
        }

    Read the article

  • Record Screen Activity with CamStudio

    - by Asian Angel
    Sometimes a visual demonstration works much better than a list of instructions. If you need to make a demo video for family and/or friends, then you might want to have a look at CamStudio.

    Using CamStudio

    To get properly set up you will need to install two different files (the main program followed by the codec). Once that is done you are ready to get started. When you start the program you will see a surprisingly small window. Notice the highlighted "Record to" text… it serves as a visual indicator for the video type selected for recording. Before you start creating a video it would be a good idea to look through some of the settings. The first one to look at is the region or area that you want to record. Next you will want to look through the video options, since these will affect the quality and final size of your video files. The default setting for quality is 70… adjust that to the level that best suits your needs. Note: For our example we maxed out the various video settings for best quality. On our system Microsoft Video 1 was listed as the default compressor, but as you can see there were other options available. You can configure the settings for the compressor you want to use if desired. Keep in mind that each compressor will have unique settings of its own, so if you change it, be certain to go back and check. We decided to use the CamStudio Lossless Codec for our example (it gave the best results while trying the software). Going back to the main window, you can toggle back and forth between .avi and .swf output using the last button. Once you are satisfied with the settings, click on the red record button to start. If you need to pause while recording or stop recording, click on the system tray icon and select the appropriate command. When you are finished recording, you will be presented with the save file window. Browse for the desired save location and name your new file. Once you have saved the file, the movie player window will automatically open so that you can view your new video. Our sample video shown here is at 50% of original size, so it may look slightly “gritty”. The detail was much better at 100%. If you decide to record and save as .swf, the process will be identical to recording in .avi format until the movie player window opens. At that time the conversion process from .avi to .swf will begin. When complete, you will have a new Flash video and an HTML file that goes with it. Depending on which browser you have set as default, you may run into a small problem when the preview for your new .swf file tries to open. There is a small bug in the generated HTML file. You can use this work-around, or… just open the .swf file directly in your favorite browser.

    Conclusion

    CamStudio may not produce the highest quality videos, but it’s free and does a very nice job nonetheless. If you are working on a tight budget or only need to make an occasional video, then CamStudio is a very sensible choice.

    Links

    Download CamStudio Stable Version & CamStudio Codec *Download links are approximately half-way down the page. Download CamStudio Stable Version & CamStudio Codec at SourceForge *Beta version also available here.

    Read the article

  • Use Google Analytics to record newsletter clicks to an external website

    - by rlsaj
    Note: by external website I mean a website whose code we do not have access to, for example www.facebook.com.

    I want to record how many social-share clicks we get from our customer newsletters. For example, when a customer receives a newsletter, they can click "Share this on Facebook", which shares the hosted version of the newsletter. If I wanted to record these newsletter clicks to our own website, I understand we'd use the Google URL Builder (https://support.google.com/analytics/answer/1033867?hl=en) to create a UTM URL, but because we're linking to an external site, how do we record this?

    Read the article

  • I want to record a screencast of a processing sketch

    - by nathanvda
    I have created a music visualisation using Processing. I now want to convert it to a video, and the least obtrusive way I could think of is to record a screencast. I figured that exporting Processing to video including audio, from within Processing itself, on Ubuntu seemed to be an unsolved issue: very hard, and it could also cause timing/sync issues (since the music keeps running while images are captured). So, on to the screencast method. Dead easy, I figured. But I was wrong. The first hurdle was to find a way to record the sound from the audio output (and not the mic). I did find a tutorial for that here. In short: use gtk-recordmydesktop and PulseAudio. But apparently, what happens is this: Processing does not use ALSA, and when the sound is playing, it does not appear in the PulseAudio mixer. How can I record the audio now?

    Read the article

  • MS Access: How can I count distinct values using an Access query?

    - by Sadat
    Here is the current complex query, given below.

        SELECT DISTINCT Evaluation.ETCode, Training.TTitle, Training.Tcomponent, Training.TImpliment_Partner, Training.TVenue, Training.TStartDate, Training.TEndDate, Evaluation.EDate, Answer.QCode, Answer.Answer, Count(Answer.Answer) AS [Count], Questions.SL, Questions.Question
        FROM ((Evaluation INNER JOIN Training ON Evaluation.ETCode=Training.TCode) INNER JOIN Answer ON Evaluation.ECode=Answer.ECode) INNER JOIN Questions ON Answer.QCode=Questions.QCode
        GROUP BY Evaluation.ETCode, Answer.QCode, Training.TTitle, Training.Tcomponent, Training.TImpliment_Partner, Training.Tvenue, Answer.Answer, Questions.Question, Training.TStartDate, Training.TEndDate, Evaluation.EDate, Questions.SL
        ORDER BY Answer.QCode, Answer.Answer;

    There is another column, Training.TCode. I need to count the distinct Training.TCode values; can anybody help me? If you need more information, please let me know.

    Read the article

  • PHP - not returning a count number for filled array...

    - by Phil Jackson
    Morning. This is eating me alive, so I'm hoping it's not something stupid, lol.

        $arrg = array();
        if( str_word_count( $str ) > 1 ) {
            $input_arr = explode(' ', $str);
            die(print_r($input_arr));
            $count = count($input_arr);
            die($count);

    The above is part of a function. When I run it, I get:

        Array
        (
            [0] => luke
            [1] => snowden
            [2] => create
            [3] => develop
            [4] => web
            [5] => applications
            [6] => sites
            [7] => alse
            [8] => dab
            [9] => hand
            [10] => design
            [11] => love
            [12] => helping
            [13] => business
            [14] => thrive
            [15] => latest
            [16] => industry
            [17] => developer
            [18] => act
            [19] => designs
            [20] => php
            [21] => mysql
            [22] => jquery
            [23] => ajax
            [24] => xhtml
            [25] => css
            [26] => de
            [27] => montfont
            [28] => award
            [29] => advanced
            [30] => programming
            [31] => taught
            [32] => development
            [33] => years
            [34] => experience
            [35] => topic
            [36] => fully
            [37] => qualified
            [38] => electrician
            [39] => city
            [40] => amp
            [41] => guilds
            [42] => level
        )

    which is what I'm expecting. Run this, however, and nothing is returned!?!?!

        $arrg = array();
        if( str_word_count( $str ) > 1 ) {
            $input_arr = explode(' ', $str);
            //die(print_r($input_arr));
            $count = count($input_arr);
            die($count);

    Can anyone see anything that my eyes can't? Regards, Phil

    Read the article

  • Postgresql count+sort performance

    - by invictus
    I have built a small inventory system using PostgreSQL and psycopg2. Everything works great, except that when I want to create aggregated summaries/reports of the content, I get really bad performance due to count()'ing and sorting. The DB schema is as follows:

        CREATE TABLE hosts (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255)
        );
        CREATE TABLE items (
            id SERIAL PRIMARY KEY,
            description TEXT
        );
        CREATE TABLE host_item (
            id SERIAL PRIMARY KEY,
            host INTEGER REFERENCES hosts(id) ON DELETE CASCADE ON UPDATE CASCADE,
            item INTEGER REFERENCES items(id) ON DELETE CASCADE ON UPDATE CASCADE
        );

    There are some other fields as well, but those are not relevant. I want to extract 2 different reports:

    - a list of all hosts with the number of items per host, ordered from highest to lowest count
    - a list of all items with the number of hosts per item, ordered from highest to lowest count

    I have used 2 queries for the purpose. Items with host count:

        SELECT i.id, i.description, COUNT(hi.id) AS count
        FROM items AS i
        LEFT JOIN host_item AS hi ON (i.id = hi.item)
        GROUP BY i.id
        ORDER BY count DESC
        LIMIT 10;

    Hosts with item count:

        SELECT h.id, h.name, COUNT(hi.id) AS count
        FROM hosts AS h
        LEFT JOIN host_item AS hi ON (h.id = hi.host)
        GROUP BY h.id
        ORDER BY count DESC
        LIMIT 10;

    The problem is that the queries run for 5-6 seconds before returning any data. As this is a web-based application, 6 seconds is just not acceptable. The database is heavily populated, with approximately 50k hosts, 1000 items and 400,000 host/item relations, and it will likely grow significantly when (or perhaps if) the application is used. After playing around, I found that by removing the "ORDER BY count DESC" part, both queries execute instantly, without any delay whatsoever (less than 20 ms to finish). Is there any way I can optimize these queries so that I can get the result sorted without the delay? I was trying different indexes, but seeing as the count is computed, it is not possible to utilize an index for this. I have read that count()'ing in PostgreSQL is slow, but it's the sorting that is causing me problems... My current workaround is to run the queries above as an hourly job, putting the result into a new table with an index on the count column for quick lookup. I use PostgreSQL 9.2.

    Read the article

< Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >