Search Results

Search found 19499 results on 780 pages for 'transaction log'.


  • Generating SSH Keys on the Server

    - by mupro
    I have set up sshd on a Linux server and managed to log in via keys generated using ssh-keygen. However, I have made the following observation: when I generate the key pair on the client and copy the public key to the server, everything works fine. But when I generate the key pair on the server and copy the private key to the client, I cannot log in. Can anybody explain to me if and why the keys have to be created on the client?
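
    For comparison, this is a minimal sketch of the usual client-side flow; the key type, file names and user@server are illustrative defaults, not taken from the question:

        # generate the pair on the client; the private key never leaves this machine
        ssh-keygen -t rsa -f ~/.ssh/id_rsa
        # append the public key to ~/.ssh/authorized_keys on the server
        ssh-copy-id -i ~/.ssh/id_rsa.pub user@server

    In principle the pair can be generated on either side; what usually breaks a server-generated pair is file permissions or a copying error on the private key, not where it was created.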

    Read the article

  • Grep-ing gzipped files [duplicate]

    - by Julien Genestoux
    This question already has an answer here: Grepping through .gz log files. I have a set of 100 log files, compressed using gzip. I need to find all lines matching a given expression. I'd use grep, but of course that's a bit of a nightmare, because I'd have to unzip all the files one by one, grep them, and delete the unzipped versions, since they wouldn't all fit on my server if they were all unzipped. Does anyone have a little trick for getting that done quickly?
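
    A minimal sketch of one common approach (the path and pattern are placeholders): zgrep decompresses on the fly, so nothing has to be unpacked to disk.

        # search all compressed logs in place; -H prints the matching file name
        zgrep -H 'expression' /path/to/logs/*.gz
        # equivalent pipeline if zgrep is not available
        zcat /path/to/logs/*.gz | grep 'expression'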

    Read the article

  • MySQL Non Index Queries Analysis

    - by Markii
    I'm using the "log queries not using indexes" option, but it also logs queries that do use indexes and are just more complex or use IFs. Is there a parser or a program out there that can analyze the log and give me a literal output saying "table.column should be an index"? Thanks
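
    Two commonly used tools for this kind of log (the log path is an assumption; neither literally prints "table.column should be an index", but grouping the queries makes the missing indexes much easier to spot):

        # summarise the slow/no-index log, most frequent queries first
        mysqldumpslow -s c /var/log/mysql/mysql-slow.log | head -n 20
        # Percona Toolkit's analyzer gives a more detailed per-query report
        pt-query-digest /var/log/mysql/mysql-slow.log > digest.txt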

    Read the article

  • SQL Server 2000 Enterprise Edition - Login Fail for User (computer Name)/Guest

    - by Nazmuzzaman Manna
    SQL Server 2000 Enterprise Edition on Windows XP: login fails for the ODBC SQL Server user COMPUTERNAME/Guest. I am connecting from several computers with a VB6 application using the following connection string: "PROVIDER=MSDASQL;driver={SQL Server};server=Computer Name.;uid=;pwd=;database=Test Name;" But three computers cannot log in. These three (connected to each other) are in a completely separate room. I have checked all the obvious options. Which option am I missing? Please help me...

    Read the article

  • Nginx reverse proxy error page

    - by Lormayna
    I'm using nginx as a reverse proxy for a single machine. I would like to show an error page when the backend machine goes down. This is my configuration file:

        server {
            listen 80;
            access_log /var/log/nginx/access.log;
            root /var/www/nginx;
            error_page 403 404 500 502 503 504 /error.html;
            location / {
                proxy_pass http://192.168.1.78/;
                include /etc/nginx/proxy.conf;
            }
        }
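
    One hedged sketch of a direction to try (proxy_intercept_errors is a standard nginx directive; whether it fits depends on what proxy.conf already sets): by default nginx passes backend error responses straight through, and with this config a request for /error.html would itself be proxied to the dead backend, so a dedicated location is needed to serve the page locally.

        location / {
            proxy_pass http://192.168.1.78/;
            proxy_intercept_errors on;   # let error_page handle 5xx coming back from the backend
            include /etc/nginx/proxy.conf;
        }
        location = /error.html {
            root /var/www/nginx;   # serve the static error page from the local disk
            internal;              # only reachable via the error_page internal redirect
        }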

    Read the article

  • Oracle error when logging into database

    - by Bryan
    When I try to log into my db with a specific user I get this message (below is from the alert log). I can log in as system just fine. Does anyone know how to figure out what is causing this? Thanks in advance for the help.

        ----- Error Stack Dump -----
        ORA-00604: error occurred at recursive SQL level 1
        ORA-01438: value larger than specified precision allowed for this column
        ORA-06512: at line 2

    Oracle 10g, OEL 5.5

    Read the article

  • Website defaced, what can I do?

    - by SteD
    My company's website has been defaced. Given that I have the raw Apache access log, is there anything I can do to analyze when and how it went wrong? In other words, what should I look out for among all those thousands and thousands of lines of log? Thanks for the help
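
    A few illustrative starting points, assuming a combined-format access log at /var/log/apache2/access.log (the path and patterns are assumptions, not from the question):

        # POST requests are a common vector for upload/injection attacks
        grep '"POST ' /var/log/apache2/access.log | less
        # noisiest client IPs, often an attacker's scanner
        awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head
        # fingerprints that frequently show up in exploit attempts
        grep -Ei 'wget|curl|base64_decode|\.\./' /var/log/apache2/access.log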

    Read the article

  • zipping a file used by a process

    - by jaganath
    I accidentally zipped a log file belonging to a process (the process wasn't writing to it at the time, though; it only writes to it on weekends, when the process gets killed). I immediately unzipped the file again. Will this affect the process when it tries to write to the log file?

    Read the article

  • Windows Identity Foundation: How to get new security token in ASP.net

    - by Rising Star
    I'm writing an ASP.net application that uses Windows Identity Foundation. My ASP.net application uses claims-based authentication with passive redirection to a security token service. This means that when a user accesses the application, they are automatically redirected to the Security Token Service where they receive a security token which identifies them to the application. In ASP.net, security tokens are stored as cookies. I want to have something the user can click on in my application that will delete the cookie and redirect them to the Security Token Service to get a new token. In short, make it easy to log out and log in as another user. I try to delete the token-containing cookie in code, but it persists somehow. How do I remove the token so that the user can log in again and get a new token?
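
    A hedged sketch of one way this is often handled with WIF's session module (assumes the Microsoft.IdentityModel API; the redirect target is a placeholder): deleting the cookie by hand can be undone by the module, whereas SignOut() clears the session token through the module itself.

        // clear the WIF session cookie (the token) via the session module,
        // then request a protected page; the passive redirect sends the user
        // back to the STS, where they can sign in as someone else
        FederatedAuthentication.SessionAuthenticationModule.SignOut();
        Response.Redirect("~/Default.aspx");   // placeholder protected page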

    Read the article

  • Python and Ruby in Oracle Tuxedo

    - by christopher.jones
    Did you know you can now develop services and applications in Python or Ruby with Oracle Tuxedo? The Tuxedo team have a blog post about it at Python and Ruby in Tuxedo. I used to think of Tuxedo as a Transaction Processing Monitor but it has evolved into much more.

    Read the article

  • Translate Basic Config.groovy log4j DSL to external log4j.properties

    - by Stephen Swensen
    The following is a basic log4j configuration inside Config.groovy using the log4j DSL with Grails 1.2. It works as expected (it logs all errors to the given file):

        log4j = {
            appenders {
                file name:'file', file:"c:/error.log"
            }
            error 'grails.app'
            root {
                error 'file'
            }
        }

    How would one translate this into a properties-style log4j configuration file? The following does not work:

        log4j.rootLogger=ERROR, FA
        log4j.appender.FA=org.apache.log4j.FileAppender
        log4j.appender.FA.File=C:/error.log
        log4j.logger.grails.app=ERROR, FA

    I suspect it has something to do with the translation of error 'grails.app', but I really don't know. If it makes any difference, the properties file is configured externally:

        grails.config.locations = ["file:${basedir}/extconf/log4j.properties"]

    All I really want is an external log4j properties file which logs all application exceptions to a file.

    Read the article

  • fitbounds() in Google maps api V3 does not fit bounds

    - by jul
    Hi, I'm using the geocoder from Google Maps API v3 to display a map of a country. I get the recommended viewport for the country, but when I try to fit the map to this viewport it does not work (see the bounds before and after calling the fitBounds function in the code below). What am I doing wrong? How can I set the viewport of my map to results[0].geometry.viewport? Thanks, jul

        var geocoder = new google.maps.Geocoder();
        if (geocoder) {
            geocoder.geocode({'address': '{{countrycode}}'}, function(results, status) {
                var bounds = new google.maps.LatLngBounds();
                bounds = results[0].geometry.viewport;
                console.log(bounds);          // ((35.173, -12.524), (45.244, 5.098))
                console.log(map.getBounds()); // ((34.628757195038844, -14.683750000000012), (58.28355197864712, 27.503749999999986))
                map.fitBounds(bounds);
                console.log(map.getBounds()); // ((25.740113512090183, -24.806750000000005), (52.44233664523719, 17.380749999999974))
            });
        }
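
    One hedged note, sketched below: fitBounds() is applied asynchronously, so reading getBounds() on the very next line does not yet reflect the fitted viewport; waiting for the map to settle usually shows the expected bounds.

        map.fitBounds(results[0].geometry.viewport);
        // read the bounds only after the map has finished moving
        google.maps.event.addListenerOnce(map, 'idle', function() {
            console.log(map.getBounds());   // now reflects the fitted viewport
        });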

    Read the article

  • ADNOC talks about 50x increase in performance

    - by KLaker
    If you are still wondering how Exadata can revolutionise your business, then I would recommend watching this great video, which was recorded at this year's OpenWorld. First a little background... The Abu Dhabi National Oil Company for Distribution (ADNOC) is an integrated energy company that was founded in 1973. ADNOC Distribution markets and distributes petroleum products and services within the United Arab Emirates and internationally. As one of the largest and most innovative government-owned petroleum companies in the Arab Gulf, ADNOC Distribution is renowned and respected for the exceptional quality and reliability of its products and services. Its five corporate divisions include more than 200 filling stations (a number that is growing at 8% annually), more than 150 convenience stores, 10 vehicle inspection stations, as well as wholesale and retail sales of bulk fuel, gas, oil, diesel, and lubricants.

    ADNOC selected Oracle Exadata Database Machine after extensive research because it provided them with a single platform that can run mixed workloads in a single unified machine: "We chose Oracle Exadata Database Machine because it offered a fully integrated and highly engineered system that was ready to deploy. With our infrastructure running all the same technology, we can operate any type of Oracle Database without restrictions and be prepared for business growth," said Ali Abdul Aziz Al-Ali, IT division manager, ADNOC Distribution. ".....we could consolidate our transaction processing and business intelligence onto one platform. Competing solutions are just not capable of doing that." - Awad Ahmed Ali El-Sidiq, Senior Database Administrator, ADNOC Distribution

    In this new video Awad Ahmed Ali El-Sidiq, Senior DBA at ADNOC, talks about the impact that Exadata has had on his team and the whole business. ADNOC is using our engineered systems to drive and manage all their workloads: from transaction systems to payment systems to data warehouse to BI environment. A true disk-to-dashboard revolution using engineered systems. This engineered approach is delivering a 50x improvement in performance, with some queries running 100x faster! The IT team has even revolutionised some of their data warehouse related processes with the help of Exadata, and jobs that were taking over 4 hours now run in a few minutes.

    To watch the video, click on the image below, which will take you to our Oracle YouTube page (if the link does not work, click here: http://www.youtube.com/watch?v=zcRpxc6u5Ic).

    Now that queries are running 100x faster and jobs are completing in minutes not hours, what is next for the IT team at ADNOC? Like many of our customers, ADNOC is now looking to take advantage of big data to help them better align their business operations with customer behaviour and customer insights. To help deliver this next level of insight, the IT team is looking at the new features in Oracle Database 12c, such as the new in-memory feature, to deliver even more performance gains. The great news is that Awad Ahmed Ali El-Sidiq was awarded DBA of the Year - EMEA within our Data Warehouse Global Leaders programme, and you can see the badge for this award pop up at the start of the video. Well done to everyone at ADNOC, and thanks for spending the time with us at OOW to create this great video.

    Read the article

  • SmtpClient.SendAsync code review

    - by Lieven Cardoen
    I don't know if this is the way it should be done:

        {
            ...
            var client = new SmtpClient { Host = _smtpServer };
            client.SendCompleted += SendCompletedCallback;
            var userState = mailMessage;
            client.SendAsync(mailMessage, userState);
            ...
        }

        private static void SendCompletedCallback(object sender, AsyncCompletedEventArgs e)
        {
            // Get the unique identifier for this asynchronous operation.
            var mailMessage = (MailMessage)e.UserState;
            if (e.Cancelled)
            {
                Log.Info(String.Format("[{0}] Send canceled.", mailMessage));
            }
            if (e.Error != null)
            {
                Log.Error(String.Format("[{0}] {1}", mailMessage, e.Error));
            }
            else
            {
                Log.Info("Message sent.");
            }
            mailMessage.Dispose();
        }

    Disposing the mailMessage after the client.SendAsync(...) throws an error. So I need to dispose it in the callback handler.

    Read the article

  • Symfony2: automatically logging in users from their Windows session

    - by Paul Maclean
    In Symfony2 I have built an intranet. It currently uses the FOSUserBundle and an LDAP bundle to log users in, and I would like to add the functionality to log users in from their Windows session. I found an NTLM script for PHP and an updated version of it, but I haven't been able to incorporate them into Symfony2. I also found an NTLM bundle for Symfony2, but it was written for an older version of Symfony and is not maintained anymore. I was unable to rewrite it and get it to work. My question is: how could I automatically log in users from their Windows session in my Symfony2 app, in addition to the already present LDAP functionality? What would be the best and easiest way?

    Read the article

  • Not getting a response when using Android Async HTTP (from loopj)

    - by conor
    I am using the Async Http library from loopj.com and also the sample code from the site. The problem is that when the request is made I don't get a response. I have even overridden the onFinish() function, which isn't getting fired either. I am using the sample code from their site, which is as follows:

        import com.loopj.android.http.AsyncHttpClient;
        import com.loopj.android.http.AsyncHttpResponseHandler;

        Log.v("bopzy_debug", "Testing HTTP Connectivity");
        System.out.println("123");

        AsyncHttpClient client = new AsyncHttpClient();
        client.get("http://www.google.com", new AsyncHttpResponseHandler() {
            @Override
            public void onSuccess(String response) {
                Log.v("bopzy_debug", response);
            }

            @Override
            public void onFinish() {
                Log.v("bopzy_debug", "Finished..");
            }
        });

    Any ideas on how to solve this would be greatly appreciated; I'm not really sure what is going on here.

    Read the article

  • Transactions in LINQ to SQL applications

    - by nikolaosk
    In this post I would like to talk about LINQ to SQL and transactions.When I have a LINQ to SQL class I always get asked this question, "How does LINQ treat Transactions?". When we use the DeleteOnSubmit() method or the InsertOnSubmit() method, all of those commands at some point are translated into T-SQL commands and then are executed against the database. All of those commands live in transactions and they follow the basic rules of transaction processing. They do succeed together or fail together...(read more)
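
    As an illustration of that point, here is a minimal hedged sketch (MyDataContext and its Orders/Customers tables are made-up names): SubmitChanges() already wraps the generated T-SQL in a transaction, and an explicit TransactionScope makes several operations commit or roll back together.

        // requires a reference to System.Transactions
        using (var scope = new TransactionScope())
        using (var db = new MyDataContext())
        {
            db.Orders.DeleteOnSubmit(db.Orders.First(o => o.Id == 42));
            db.Customers.InsertOnSubmit(new Customer { Name = "Test" });
            db.SubmitChanges();    // both commands run inside one transaction
            scope.Complete();      // omit this and everything rolls back
        }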

    Read the article

  • UnboundLocalError: local variable 'rows' referenced before assignment

    - by patrick
    I'm trying to make a database connection from another script, but the script doesn't work properly: if I do a print on rows I get the value 'null'. However, if I use a 'select * from incidents' query, then I do get the results from the incidents table.

        import database

        rows = database.database("INSERT INTO incidents VALUES(3 ,'test_title1', 'test', TO_DATE('25-07-2012', 'DD-MM-YYYY'), CURRENT_TIMESTAMP, 'sector', 50, 60)")
        #print database.database()
        print rows

    database.py script:

        import psycopg2
        import sys
        import logfile

        def database(query):
            logfile.log(20, 'database.py', 'Executing...')
            con = None
            try:
                con = psycopg2.connect(database='incidents', user='ipfit5', password='tester')
                cur = con.cursor()
                #print query
                cur.execute(query)
                rows = cur.fetchall()
                con.commit()
                #test row does work
                #cur.execute("INSERT INTO incidents VALUES(3 ,'test_titel1', 'test', TO_DATE('25-07-2012', 'DD-MM-YYYY'), CURRENT_TIMESTAMP, 'sector', 50, 60)")
            except:
                logfile.log(40, 'database.py', 'Er is iets mis gegaan')
                logfile.log(40, 'database.py', str(sys.exc_info()))
            finally:
                if con:
                    con.close()
                return rows
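
    A possible fix, sketched under the assumption that the INSERT is the call that fails: cur.fetchall() raises on statements that return no result set, so rows is never assigned and the later return rows triggers the UnboundLocalError. Giving rows a default and only fetching when there is a result set avoids that.

        def database(query):
            rows = None                      # default, so the name always exists
            con = None
            try:
                con = psycopg2.connect(database='incidents', user='ipfit5', password='tester')
                cur = con.cursor()
                cur.execute(query)
                if cur.description is not None:   # SELECTs have a result set, INSERTs do not
                    rows = cur.fetchall()
                con.commit()
            except Exception:
                logfile.log(40, 'database.py', str(sys.exc_info()))
            finally:
                if con:
                    con.close()
            return rows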

    Read the article

  • TimeoutException in simultaneous calls to WCF services from Silverlight application

    - by Alexander K.
    Analysing log files, I've noticed that ~1% of service calls ended with a TimeoutException on the Silverlight client side. The services (WCF) are quite simple and do not perform long computations. According to the log, all calls to the services are always processed in less than 1 second (even when the TimeoutException occurs on the client!), so it is not a server timeout. So what is wrong? Could it be a configuration or network problem? How can I avoid it? What additional logging information would be helpful for localizing this issue? The only workaround I've thought of is to retry service calls after a timeout. I will appreciate any help on this issue! Update: On startup the application performs 17 service calls, 12 of them simultaneously (could that be the cause of the failure?). Update: The WCF log did not contain useful information about this issue. It seems some service calls do not reach the server side.

    Read the article

  • SQL Server: Database stuck in "Restoring" state

    - by Ian Boyd
    I backed up a database:

        BACKUP DATABASE MyDatabase
        TO DISK = 'MyDatabase.bak'
        WITH INIT --overwrite existing

    And then tried to restore it:

        RESTORE DATABASE MyDatabase
        FROM DISK = 'MyDatabase.bak'
        WITH REPLACE --force restore over specified database

    And now the database is stuck in the restoring state. Some people have theorized that it's because there was no log file in the backup, and it needed to be rolled forward using:

        RESTORE DATABASE MyDatabase WITH RECOVERY

    Except that, of course, fails:

        Msg 4333, Level 16, State 1, Line 1
        The database cannot be recovered because the log was not restored.
        Msg 3013, Level 16, State 1, Line 1
        RESTORE DATABASE is terminating abnormally.

    And exactly what you want in a catastrophic situation is a restore that won't work. The backup contains both a data and a log file:

        RESTORE FILELISTONLY FROM DISK = 'MyDatabase.bak'

        Logical Name     PhysicalName
        ==============   ===============
        MyDatabase       C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabase.mdf
        MyDatabase_log   C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabase_log.LDF
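
    One hedged thing worth trying (a sketch, not a guaranteed fix): restore from the same backup again and bring the database online in one step. RECOVERY is the default, but stating it makes the intent explicit; it only helps if the full restore from the .bak completes.

        RESTORE DATABASE MyDatabase
        FROM DISK = 'MyDatabase.bak'
        WITH REPLACE, RECOVERY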

    Read the article

  • Four Emerging Payment Stories

    - by David Dorf
    The world of alternate payments has been moving fast of late.  Innovation in this area will help both consumers and retailers, but probably hurt the banks (at least that's the plan).  Here are four recent news items in this area:

    Dwolla, a start-up in Iowa, is trying to make credit cards obsolete.  Twelve guys in Des Moines are using $1.3M they raised to allow businesses to skip the credit card networks and avoid the fees.  Today they move about $1M a day across their network with an average transaction size of $500. Instead of charging merchants 2.9% plus $.30 per transaction, Dwolla charges a quarter -- yep, that coin featuring George Washington. Dwolla (Web + Dollar = Dwolla) avoids the credit networks and connects directly to bank accounts using the bank's ACH network.  They are signing up banks and merchants targeting both B2B and C2B as well as P2P payments.  They leverage social networks to notify people they have a money transfer, and also have a mobile app that uses GPS location. However, all is not rosy.  There have been complaints about unexpected chargebacks and with debit fees being reduced by the big banks, the need is not as pronounced.  The big banks are working on their own network called clearXchange that could provide stiff competition.

    VeriFone just bought European payment processor Point for around $1B.  By itself this would not have caught my attention except for the fact that VeriFone also announced the acquisition of GlobalBay earlier this month.  In addition to their core business of selling stand-beside payment terminals, with GlobalBay they get employee-operated mobile selling tools and with Point they get a very big payment processing platform.

    MasterCard and Intel announced a partnership around payments, starting with PayPass, MasterCard's new payment technology.  Intel will lend its expertise to add additional levels of security, which seems to be the biggest barrier for consumer adoption.  Everyone is scrambling to get their piece of cash transactions, which still represents 85% of all transactions.

    Apple was awarded another mobile payment patent further cementing the rumors that the iPhone 5 will support NFC payments.  As usual, Apple is upsetting the apple cart (sorry) by moving control of key data from the carriers to Apple.  With Apple's vast number of iTunes accounts, they have a ready-made customer base to use the payment infrastructure, which I bet will slowly transition people away from credit cards and toward cheaper ACH.  Gary Schwartz explains the three step process Apple is taking to become a payment processor.

    Below is a picture I drew representing payments in the retail industry. There's certainly a lot of innovation happening.

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and most recently for Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt.

    As data in your applications grows, it's the database that usually becomes a bottleneck. It's hard to scale a relational DB, and the preferred approach for large scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools / techniques which can allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, an effective schema (for an information worker to self-service herself), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse or an OLAP system arises.

    A key point to remember is that a data warehouse is mostly a relational database. It's built on top of the same concepts like tables, rows, columns, primary keys, foreign keys, etc. Before we talk about how data warehouses are typically structured, let's understand the key components that can create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it:

    a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in this context could be how many customers, divided by geography, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out of the box features provided by some databases, e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements.

    b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that can run periodically and take these changes from the OLTP system and load them into the data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them.

    c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains, though, is how the warehouse will record these changes. This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimension (SCD). While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This would create multiple records for a given entity in a relational database, and data warehouses prefer having their own primary key, often known as a surrogate key.

    As I mentioned, a data warehouse is just a relational database, but the industry often attributes a specific schema style to data warehouses. These styles are Star Schema or Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one), which is easy to understand / use, easy to query and easy to slice / dice. Star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables have these numbers and have references (foreign keys) to a set of tables that provide context around those facts. E.g. if you have recorded 10,000 USD as sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table which contains the dates between which that sale was made. These agent and time tables are called dimensions, which provide context to the numbers stored in fact tables. This schema structure of a fact at the center surrounded by dimensions is called a Star schema. A similar structure, with the difference that the dimension tables are normalized, is called a Snowflake schema. (An illustrative table fragment is sketched at the end of this post.)

    This relational structure of facts and dimensions serves as an input for another analysis structure called a Cube. Though physically a Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it's a multidimensional structure where dimensions define the sides of the cube and facts define the content. Facts are often called Measures inside a cube. Dimensions often tend to form a hierarchy. E.g. Product may be broken into categories, and categories in turn into individual items. Category and Items are often referred to as Levels and their constituents as Members, with their overall structure called a Hierarchy. Measures are rolled up as per the dimensional hierarchy. These rolled up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don't worry, it will sink in as you start working with Cubes and others.

    Let's see a few other terms that we would run into while talking about data warehouses. ODS or an Operational Data Store is a frequently misused term. There would be a few users in your organization that want to report on the most current data and can't afford to miss a single transaction for their report. Then there is another set of users that typically don't care how current the data is. Mostly senior level executives who are interested in trending, mining, forecasting, strategizing, etc. don't care for that one specific transaction. This is where an ODS can come in handy. An ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS would be short lived, i.e. for a few months, and the ODS would sync with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS, either daily or weekly, depending on business drivers.

    Data marts are another frequently talked about topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span over an enterprise are normally too big to scope, build, manage, track, etc. Hence they are often scaled down to something called a Data mart that supports a specific segment of business like sales, marketing, or support. Data marts too are often designed using the star schema model discussed earlier. Industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse. The data warehouse here acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
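
    To make the star schema described above concrete, here is an illustrative T-SQL fragment; every table and column name is invented for the example.

        -- two dimensions giving context, one fact table holding the measure
        CREATE TABLE DimAgent (AgentKey INT PRIMARY KEY, AgentName VARCHAR(100));
        CREATE TABLE DimDate  (DateKey  INT PRIMARY KEY, CalendarDate DATE);
        CREATE TABLE FactSales
        (
            SalesKey    INT PRIMARY KEY,                     -- surrogate key
            AgentKey    INT REFERENCES DimAgent(AgentKey),   -- who made the sale
            DateKey     INT REFERENCES DimDate(DateKey),     -- when it was made
            SalesAmount DECIMAL(18,2)                        -- the measure to slice and dice
        );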

    Read the article

  • flush output in Bourne Shell

    - by n-alexander
    I use echo in Upstart scripts to log things:

        script
            echo "main: some data" >> log
        end script

        post-start script
            echo "post-start: another data" >> log
        end script

    Now these two run in parallel, so in the logs I often see:

        main: post-start: some data another data

    This is not critical, so I won't employ proper synching, but thought I'd turn auto flush ON to at least reduce this effect. Is there an easy way to do that? Update: yes, flushing will not properly fix it, but I've seen it help such situations to some degree, and this is all I need in this case. It's just that I don't know how to do it in Shell

    Read the article
