Search Results

Search found 58440 results on 2338 pages for 'data cleansing'.


  • Seeking on a Heap, and Two Useful DMVs

    - by Paul White
    So far in this mini-series on seeks and scans, we have seen that a simple ‘seek’ operation can be much more complex than it first appears.  A seek can contain one or more seek predicates – each of which can either identify at most one row in a unique index (a singleton lookup) or a range of values (a range scan).  When looking at a query plan, we will often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.  As you saw in the first post in this series, the number of hidden seeking operations can have an appreciable impact on performance. Measuring Seeks and Scans I mentioned in my last post that there is no way to tell from a graphical query plan whether you are seeing a singleton lookup or a range scan.  You can work it out – if you happen to know that the index is defined as unique and the seek predicate is an equality comparison, but there’s no separate property that says ‘singleton lookup’ or ‘range scan’.  This is a shame, and if I had my way, the query plan would show different icons for range scans and singleton lookups – perhaps also indicating whether the operation was one or more of those operations underneath the covers. In light of all that, you might be wondering if there is another way to measure how many seeks of either type are occurring in your system, or for a particular query.  As is often the case, the answer is yes – we can use a couple of dynamic management views (DMVs): sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats. Index Usage Stats The index usage stats DMV contains counts of index operations from the perspective of the Query Executor (QE) – the SQL Server component that is responsible for executing the query plan.  It has three columns that are of particular interest to us: user_seeks – the number of times an Index Seek operator appears in an executed plan user_scans – the number of times a Table Scan or Index Scan operator appears in an executed plan user_lookups – the number of times an RID or Key Lookup operator appears in an executed plan An operator is counted once per execution (generating an estimated plan does not affect the totals), so an Index Seek that executes 10,000 times in a single plan execution adds 1 to the count of user seeks.  Even less intuitively, an operator is also counted once per execution even if it is not executed at all.  I will show you a demonstration of each of these things later in this post. Index Operational Stats The index operational stats DMV contains counts of index and table operations from the perspective of the Storage Engine (SE).  It contains a wealth of interesting information, but the two columns of interest to us right now are: range_scan_count – the number of range scans (including unrestricted full scans) on a heap or index structure singleton_lookup_count – the number of singleton lookups in a heap or index structure This DMV counts each SE operation, so 10,000 singleton lookups will add 10,000 to the singleton lookup count column, and a table scan that is executed 5 times will add 5 to the range scan count. The Test Rig To explore the behaviour of seeks and scans in detail, we will need to create a test environment.  The scripts presented here are best run on SQL Server 2008 Developer Edition, but the majority of the tests will work just fine on SQL Server 2005.  A couple of tests use partitioning, but these will be skipped if you are not running an Enterprise-equivalent SKU.  
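    As a quick aside before building the test rig: since the graphical plan never labels an operation as a singleton lookup or a range scan, one way to confirm whether an equality seek can possibly be a singleton lookup is to check that the target index is defined as unique. A minimal sketch against the Example table created below (the object name is taken from this article):

        -- List the indexes on the test table and whether each is unique;
        -- an equality seek can only be a singleton lookup on a unique index
        SELECT I.name, I.type_desc, I.is_unique
        FROM sys.indexes AS I
        WHERE I.object_id = OBJECT_ID(N'dbo.Example', N'U');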
Ok, first up we need a database: USE master; GO IF DB_ID('ScansAndSeeks') IS NOT NULL DROP DATABASE ScansAndSeeks; GO CREATE DATABASE ScansAndSeeks; GO USE ScansAndSeeks; GO ALTER DATABASE ScansAndSeeks SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE ScansAndSeeks SET AUTO_CLOSE OFF, AUTO_SHRINK OFF, AUTO_CREATE_STATISTICS OFF, AUTO_UPDATE_STATISTICS OFF, PARAMETERIZATION SIMPLE, READ_COMMITTED_SNAPSHOT OFF, RESTRICTED_USER ; Notice that several database options are set in particular ways to ensure we get meaningful and reproducible results from the DMVs.  In particular, the options to auto-create and update statistics are disabled.  There are also three stored procedures, the first of which creates a test table (which may or may not be partitioned).  The table is pretty much the same one we used yesterday: The table has 100 rows, and both the key_col and data columns contain the same values – the integers from 1 to 100 inclusive.  The table is a heap, with a non-clustered primary key on key_col, and a non-clustered non-unique index on the data column.  The only reason I have used a heap here, rather than a clustered table, is so I can demonstrate a seek on a heap later on.  The table has an extra column (not shown because I am too lazy to update the diagram from yesterday) called padding – a CHAR(100) column that just contains 100 spaces in every row.  It’s just there to discourage SQL Server from choosing table scan over an index + RID lookup in one of the tests. The first stored procedure is called ResetTest: CREATE PROCEDURE dbo.ResetTest @Partitioned BIT = 'false' AS BEGIN SET NOCOUNT ON ; IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; IF @Partitioned = 'true' BEGIN -- Enterprise, Trial, or Developer -- required for partitioning tests IF SERVERPROPERTY('EngineEdition') = 3 BEGIN EXECUTE (' DROP TABLE dbo.Example ; IF EXISTS ( SELECT 1 FROM sys.partition_schemes WHERE name = N''PS'' ) DROP PARTITION SCHEME PS ; IF EXISTS ( SELECT 1 FROM sys.partition_functions WHERE name = N''PF'' ) DROP PARTITION FUNCTION PF ; CREATE PARTITION FUNCTION PF (INTEGER) AS RANGE RIGHT FOR VALUES (20, 40, 60, 80, 100) ; CREATE PARTITION SCHEME PS AS PARTITION PF ALL TO ([PRIMARY]) ; CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ON PS (key_col); '); END ELSE BEGIN RAISERROR('Invalid SKU for partition test', 16, 1); RETURN; END; END ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; END; GO The second stored procedure, ShowStats, displays information from the Index Usage Stats and Index Operational Stats DMVs: CREATE PROCEDURE dbo.ShowStats @Partitioned BIT = 'false' AS BEGIN -- Index Usage Stats DMV (QE) SELECT index_name = ISNULL(I.name, I.type_desc), scans = IUS.user_scans, seeks = IUS.user_seeks, lookups = IUS.user_lookups FROM sys.dm_db_index_usage_stats AS IUS JOIN sys.indexes AS I ON 
I.object_id = IUS.object_id AND I.index_id = IUS.index_id WHERE IUS.database_id = DB_ID(N'ScansAndSeeks') AND IUS.object_id = OBJECT_ID(N'dbo.Example', N'U') ORDER BY I.index_id ; -- Index Operational Stats DMV (SE) IF @Partitioned = 'true' SELECT index_name = ISNULL(I.name, I.type_desc), partitions = COUNT(IOS.partition_number), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; ELSE SELECT index_name = ISNULL(I.name, I.type_desc), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; END; The final stored procedure, RunTest, executes a query written against the example table: CREATE PROCEDURE dbo.RunTest @SQL VARCHAR(8000), @Partitioned BIT = 'false' AS BEGIN -- No execution plan yet SET STATISTICS XML OFF ; -- Reset the test environment EXECUTE dbo.ResetTest @Partitioned ; -- Previous call will throw an error if a partitioned -- test was requested, but SKU does not support it IF @@ERROR = 0 BEGIN -- IO statistics and plan on SET STATISTICS XML, IO ON ; -- Test statement EXECUTE (@SQL) ; -- Plan and IO statistics off SET STATISTICS XML, IO OFF ; EXECUTE dbo.ShowStats @Partitioned; END; END; The Tests The first test is a simple scan of the heap table: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example'; The top result set comes from the Index Usage Stats DMV, so it is the Query Executor’s (QE) view.  The lower result is from Index Operational Stats, which shows statistics derived from the actions taken by the Storage Engine (SE).  We see that QE performed 1 scan operation on the heap, and SE performed a single range scan.  Let’s try a single-value equality seek on a unique index next: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 32'; This time we see a single seek on the non-clustered primary key from QE, and one singleton lookup on the same index by the SE.  Now for a single-value seek on the non-unique non-clustered index: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32'; QE shows a single seek on the non-clustered non-unique index, but SE shows a single range scan on that index – not the singleton lookup we saw in the previous test.  That makes sense because we know that only a single-value seek into a unique index is a singleton seek.  A single-value seek into a non-unique index might retrieve any number of rows, if you think about it.  The next query is equivalent to the IN list example seen in the first post in this series, but it is written using OR (just for variety, you understand): EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32 OR data = 33'; The plan looks the same, and there’s no difference in the stats recorded by QE, but the SE shows two range scans.  Again, these are range scans because we are looking for two values in the data column, which is covered by a non-unique index.  I’ve added a snippet from the Properties window to show that the query plan does show two seek predicates, not just one.  
Now let’s rewrite the query using BETWEEN: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data BETWEEN 32 AND 33'; Notice the seek operator only has one predicate now – it’s just a single range scan from 32 to 33 in the index – as the SE output shows.  For the next test, we will look up four values in the key_col column: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col IN (2,4,6,8)'; Just a single seek on the PK from the Query Executor, but four singleton lookups reported by the Storage Engine – and four seek predicates in the Properties window.  On to a more complex example: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WITH (INDEX([PK dbo.Example key_col])) WHERE key_col BETWEEN 1 AND 8'; This time we are forcing use of the non-clustered primary key to return eight rows.  The index is not covering for this query, so the query plan includes an RID lookup into the heap to fetch the data and padding columns.  The QE reports a seek on the PK and a lookup on the heap.  The SE reports a single range scan on the PK (to find key_col values between 1 and 8), and eight singleton lookups on the heap.  Remember that a bookmark lookup (RID or Key) is a seek to a single value in a ‘unique index’ – it finds a row in the heap or cluster from a unique RID or clustering key – so that’s why lookups are always singleton lookups, not range scans. Our next example shows what happens when a query plan operator is not executed at all: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 8 AND @@TRANCOUNT < 0'; The Filter has a start-up predicate which is always false (if your @@TRANCOUNT is less than zero, call CSS immediately).  The index seek is never executed, but QE still records a single seek against the PK because the operator appears once in an executed plan.  The SE output shows no activity at all.  This next example is 2008 and above only, I’m afraid: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WHERE key_col BETWEEN 1 AND 30', @Partitioned = 'true'; This is the first example to use a partitioned table.  QE reports a single seek on the heap (yes – a seek on a heap), and the SE reports two range scans on the heap.  SQL Server knows (from the partitioning definition) that it only needs to look at partitions 1 and 2 to find all the rows where key_col is between 1 and 30 – the engine seeks to find the two partitions, and performs a range scan seek on each partition. The final example for today is another seek on a heap – try to work out the output of the query before running it! EXECUTE dbo.RunTest @SQL = 'SELECT TOP (2) WITH TIES * FROM Example WHERE key_col BETWEEN 1 AND 50 ORDER BY $PARTITION.PF(key_col) DESC', @Partitioned = 'true'; Notice the lack of an explicit Sort operator in the query plan to enforce the ORDER BY clause, and the backward range scan. © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • BizTalk Envelopes explained

    - by Robert Kokuti
    Recently I've been trying to get some order into an ESB-BizTalk pub/sub scenario, and decided to wrap the payload into standardized envelopes. I have used envelopes before in a 'lightweight' fashion, and I found that they can be quite useful and powerful if used systematically. Here is what I learned. The Theory In my experience, Envelopes are often underutilised in a BizTalk solution, and quite often their full potential is not well understood. Here I try to simplify the theory behind the Envelopes within BizTalk. Envelopes can be used to attach additional data to the ‘real’ data (payload). This additional data can contain all routing and processing information, and allows treating the business data as a ‘black box’, possibly compressed and/or encrypted etc. The point here is that the infrastructure does not need to know anything about the business data content, just as a postman does not need to know what is in the letter inside the envelope. BizTalk has built-in support for envelopes through the XMLDisassembler and XMLAssembler pipeline components (these are part of the XMLReceive and XMLSend default pipelines). These components, among other things, perform the following: XMLDisassembler Extracts the payload from the envelope into the Message Body Copies data from the envelope into the message context, as specified by the property schema(s) associated with the envelope schema. Typically, once the envelope is through the XMLDisassembler, the payload is submitted into the MessageBox, and the rest of the envelope data are copied into the context of the submitted message. The XMLDisassembler uses the Property Schemas, referenced by the Envelope Schema, to determine the name of the promoted Message Context element. XMLAssembler Wraps the Message Body inside the specified envelope schema Populates the envelope values from the message context, as specified by the property schema(s) associated with the envelope schema. Notice that there are no requirements to use the receiving envelope schema when sending. The sent message can be wrapped within any suitable envelope, regardless of whether the message was originally received within an envelope or not. However, by sharing Property Schemas between Envelopes, it is possible to pass values from the incoming envelope to the outgoing envelope via the Message Context. The Practice Creating the Envelope Add a new Schema to the BizTalk project: Envelopes are defined as schemas, with the <Schema> Envelope property set to Yes, and the root node’s Body XPath property pointing to the node which contains the payload. Typically, you’d create an envelope structure similar to this: Click on the <Schema> node and set the Envelope property to Yes. Then, click on the Envelope node, and set the Body XPath property to point to the ‘Body’ node: The ‘Body’ node is a Child Element, and its Data Structure Type is set to xs:anyType. This allows the Body node to carry any payload data. The XMLReceive pipeline will submit the data found in the Body node, while the XMLSend pipeline will copy the message into the Body node, before sending to the destination. Promoting Properties Once you have defined the envelope, you may want to promote the envelope data (anything other than the Body) as Property Fields, in order to preserve their values in the message context. Anything not promoted will be lost when the XMLDisassembler extracts the payload from the Body. Typically, this means you promote everything in the Header node. Property promotion uses associated Property Schemas.
    These are special BizTalk schemas which have a flat field structure. Property Schemas define the name of the promoted values in the Message Context, by combining the Property Schema’s Namespace and the individual Field names. It is worth being systematic when it comes to naming your schemas, their namespace and type name. A coherent method will make your life easier when it comes to referencing the schemas during development, and managing subscriptions (filters) in BizTalk Administration. I developed a fairly coherent naming convention which I’ll probably share in another article. Because the property schema must be flat, I recommend creating one for each level in the envelope header hierarchy. Property schemas are very useful in passing data between incoming and outgoing envelopes. As I mentioned earlier, in/out envelopes do not have to be the same, but you can use the same property schema when you promote the outgoing envelope fields as you used for the incoming schema. As you can reference many property schemas for field promotion, you can pick data from a variety of sources when you define your outgoing envelope. For example, the outgoing envelope can carry some of the incoming envelope’s promoted values, plus some values from the standard BizTalk message context, like the AdapterReceiveCompleteTime property from the BizTalk message-tracking properties. The values you promote for the outgoing envelope will be automatically populated from the Message Context by the XMLAssembler pipeline component. Using the Envelope Receiving Enveloped messages are automatically recognized by the XMLReceive pipeline, or any other custom pipeline which includes the XMLDisassembler component. The Body Path node will become the Message Body, while the rest of the envelope values will be added to the Message Context, as defined by the Property Schemas referenced by the Envelope Schema. Sending The Send Port’s filter expression can use the promoted properties from the incoming envelope. If you want to enclose the sent message within an envelope, the Send Port XMLAssembler component must be configured with the fully qualified envelope name: One way of obtaining the fully qualified envelope name is to copy it from the envelope schema property page: The full envelope schema name is constructed as <Name>, <Assembly>. The outgoing envelope is populated by the XMLAssembler pipeline component. The Message Body is copied to the specified envelope’s Body Path node, while the rest of the envelope fields are populated from the Message Context, according to the Property Schemas associated with the Envelope Schema. That’s all for now, happy enveloping!
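    The original post's schema-editor screenshots do not survive in this excerpt, so as a rough sketch only, an envelope of the kind described above (with an illustrative namespace and Header fields, and a Body node typed as xs:anyType) might look like this as XSD. Remember that the Envelope = Yes and Body XPath settings live in the schema's property pages rather than in the XSD text itself:

        <?xml version="1.0" encoding="utf-16"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="http://example.org/envelopes/standard"
                   xmlns="http://example.org/envelopes/standard"
                   elementFormDefault="qualified">
          <xs:element name="Envelope">
            <xs:complexType>
              <xs:sequence>
                <!-- Header fields are promoted via an associated property schema -->
                <xs:element name="Header">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element name="MessageType" type="xs:string" />
                      <xs:element name="CorrelationId" type="xs:string" />
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
                <!-- The root node's Body XPath property points here; xs:anyType lets it carry any payload -->
                <xs:element name="Body" type="xs:anyType" />
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>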

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug. Problem 1: Debugging users' bug reports When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to be able to debug customers' issues and sort out what strange schema data Oracle was returning. Problem 2: Test execution time Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless. The solution To solve these, we needed to be able to populate the schema of a database without actually connecting to it. Well, the IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to actually connect to the original database. This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time. 
They also allow us to easily debug a customers problem; using a simple snapshot generation program, users can generate a query snapshot that could be sent along with a bug report that we can immediately replay on our machines to let us debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data. Query snapshots implementation However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just populating a single database. Furthermore, although the code population queries (eg querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being a deterministic algorithm - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot, and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly. Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team to help us fix bugs in the product much faster than we otherwise would be able to.
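    To make the recording idea concrete, here is a minimal C# sketch (not Red Gate's actual implementation, and it flattens every value to a string for simplicity) of pushing the rows an IDataReader returns through a BinaryWriter so they can be replayed later without a live connection:

        using System;
        using System.Data;
        using System.IO;

        static class QuerySnapshotRecorder
        {
            // Write the raw rows of one query result to the snapshot stream.
            public static void Record(IDataReader reader, BinaryWriter writer)
            {
                writer.Write(reader.FieldCount);
                while (reader.Read())
                {
                    writer.Write(true);                 // marker: another row follows
                    for (int i = 0; i < reader.FieldCount; i++)
                    {
                        bool isNull = reader.IsDBNull(i);
                        writer.Write(isNull);
                        if (!isNull)
                        {
                            // Real code would preserve types; strings keep the sketch short.
                            writer.Write(Convert.ToString(reader.GetValue(i)));
                        }
                    }
                }
                writer.Write(false);                    // marker: end of this result set
            }
        }

    Replay is the mirror image of this sketch: an IDataReader wrapper reads the same file back and hands the stored values to the population code as if they had just arrived from Oracle.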

    Read the article

  • Why would a WebService return nulls when the actual service returns data?

    - by Jerry
    I have a webservice (out of my control) that I have to talk to. I also have a packet-sniffer on the line, and (SURPRISE!!!) the developers of the webservice aren't lying. They are actually sending back all of the data that I requested. But the web-service code that is auto-generated from the WSDL file is giving me "null" as a value. I used their WSDL file to generate my Web Reference. I checked my data types with the datatypes that the WSDL file has declared. And I used the code as listed below to perform the calls: DT_MaterialMaster_LookupRequest req = new DT_MaterialMaster_LookupRequest(); req.MaterialNumber = "101*"; req.DocumentNo = ""; req.Description = "Pipe*"; req.Plant = "0000"; MI_MaterialMaster_Lookup_OBService srv = new MI_MaterialMaster_Lookup_OBService(); DT_MaterialMaster_Response resp = srv.MI_MaterialMaster_Lookup_OB(new DT_MaterialMaster_LookupRequest[] { req }); // Note that the response here is ALWAYS null!! Console.WriteLine(resp.Status); The resp object is an actual object. It was generated properly. However, the Status and MaterialData fields are always null. When I call the web service, I've placed a packet-sniffer on the line, and I can see that I've sent the following (linebreaks and indentions for my own sanity): <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <soap:Body> <MT_MaterialMaster_Lookup xmlns="http://MyCompany.com/SomeCompany/mm/MaterialMasterSearch"> <Request xmlns=""> <MaterialNumber>101*</MaterialNumber> <Description>Pipe*</Description> <DocumentNo /> <Plant>0000</Plant> </Request> </MT_MaterialMaster_Lookup> </soap:Body> </soap:Envelope> The response that they send back SEEMS to be a valid response (linebreaks and indentions for my own sanity): <SOAP:Envelope xmlns:SOAP='http://schemas.xmlsoap.org/soap/envelope/'> <SOAP:Header /> <SOAP:Body> <n0:MT_MaterialMaster_Response xmlns:n0='http://MyCompany.com/SomeCompany/mm/MaterialMasterSearch' xmlns:prx='urn:SomeCompany.com:proxy:BRD:/1SAI/TAS4FE14A2DE960D61219AE:701:2009/02/10'> <Response> <Status>No Rows Found</Status> <MaterialData /> </Response> </n0:MT_MaterialMaster_Response> </SOAP:Body> </SOAP:Envelope> The status shows that it actually received data... but the resp.Status and resp.MaterialData fields are always null. What have I done wrong? 
UPDATE: The WSDL file is defined as: <?xml version="1.0" encoding="utf-8"?> <wsdl:definitions xmlns:p1="http://MyCompany.com/SomeCompany/mm/MaterialMasterSearch" name="MI_MaterialMaster_Lookup_AutoCAD_OB" targetNamespace="http://MyCompany.com/SomeCompany/mm/MaterialMasterSearch" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"> <wsdl:types> <xsd:schema xmlns="http://MyCompany.com/SomeCompany/mm/MaterialMasterSearch" targetNamespace="http://MyCompany.com/SomeCompany/mm/MaterialMasterSearch" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <xsd:element name="MT_MaterialMaster_Response" type="p1:DT_MaterialMaster_Response" /> <xsd:element name="MT_MaterialMaster_Lookup" type="p1:DT_MaterialMaster_Lookup" /> <xsd:complexType name="DT_MaterialMaster_Response"> <xsd:sequence> <xsd:element name="Status" type="xsd:string"> <xsd:annotation> <xsd:appinfo source="http://SomeCompany.com/xi/TextID">d48d03b040af11df99e300145eccb24e</xsd:appinfo> </xsd:annotation> </xsd:element> <xsd:element maxOccurs="unbounded" name="MaterialData"> <xsd:annotation> <xsd:appinfo source="http://SomeCompany.com/xi/TextID">64908aa040a511df843700145eccb24e</xsd:appinfo> </xsd:annotation> <xsd:complexType> <xsd:sequence> <xsd:element name="MaterialNumber" type="xsd:string"> <xsd:annotation> <xsd:appinfo source="http://SomeCompany.com/xi/TextID">64908aa140a511df848500145eccb24e</xsd:appinfo> </xsd:annotation> </xsd:element> <xsd:element minOccurs="0" name="Description" type="xsd:string"> <xsd:annotation> <xsd:appinfo source="http://SomeCompany.com/xi/TextID">64908aa240a511df95bf00145eccb24e</xsd:appinfo> </xsd:annotation> </xsd:element> <xsd:element minOccurs="0" name="DocumentNo" type="xsd:string"> <xsd:annotation> <xsd:appinfo source="http://SomeCompany.com/xi/TextID">64908aa340a511dfb23700145eccb24e</xsd:appinfo> </xsd:annotation> </xsd:element> <xsd:element minOccurs="0" name="UOM" type="xsd:string"> <xsd:annotation> <xsd:appinfo source="http://SomeCompany.com/xi/TextID">3b5f14c040a611df9fbe00145eccb24e</xsd:appinfo> </xsd:annotation> </xsd:element> <xsd:element minOccurs="0" name="Hierarchy" type="xsd:string"> <xsd:annotation> <xsd:appinfo source="http://SomeCompany.com/xi/TextID">64908aa440a511dfc65b00145eccb24e</xsd:appinfo> </xsd:annotation> </xsd:element> <xsd:element minOccurs="0" name="Plant" type="xsd:string"> <xsd:annotation> <xsd:appinfo source="http://SomeCompany.com/xi/TextID">d48d03b140af11dfb78e00145eccb24e</xsd:appinfo> </xsd:annotation> </xsd:element> <xsd:element minOccurs="0" name="Procurement" type="xsd:string"> <xsd:annotation> <xsd:appinfo source="http://SomeCompany.com/xi/TextID">d48d03b240af11dfb87b00145eccb24e</xsd:appinfo> </xsd:annotation> </xsd:element> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:sequence> </xsd:complexType> <xsd:complexType name="DT_MaterialMaster_Lookup"> <xsd:sequence> <xsd:element maxOccurs="unbounded" name="Request"> <xsd:annotation> <xsd:appinfo source="http://SomeCompany.com/xi/TextID">64908aa040a511df843700145eccb24e</xsd:appinfo> </xsd:annotation> <xsd:complexType> <xsd:sequence> <xsd:element minOccurs="0" name="MaterialNumber" type="xsd:string"> <xsd:annotation> <xsd:appinfo source="http://SomeCompany.com/xi/TextID">64908aa140a511df848500145eccb24e</xsd:appinfo> </xsd:annotation> </xsd:element> <xsd:element minOccurs="0" name="Description" type="xsd:string"> <xsd:annotation> <xsd:appinfo source="http://SomeCompany.com/xi/TextID">64908aa240a511df95bf00145eccb24e</xsd:appinfo> </xsd:annotation> </xsd:element> <xsd:element minOccurs="0" name="DocumentNo" 
type="xsd:string"> <xsd:annotation> <xsd:appinfo source="http://SomeCompany.com/xi/TextID">64908aa340a511dfb23700145eccb24e</xsd:appinfo> </xsd:annotation> </xsd:element> <xsd:element minOccurs="0" name="Plant" type="xsd:string"> <xsd:annotation> <xsd:appinfo source="http://SomeCompany.com/xi/TextID">64908aa440a511dfc65b00145eccb24e</xsd:appinfo> </xsd:annotation> </xsd:element> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:sequence> </xsd:complexType> </xsd:schema> </wsdl:types> <wsdl:message name="MT_MaterialMaster_Lookup"> <wsdl:part name="MT_MaterialMaster_Lookup" element="p1:MT_MaterialMaster_Lookup" /> </wsdl:message> <wsdl:message name="MT_MaterialMaster_Response"> <wsdl:part name="MT_MaterialMaster_Response" element="p1:MT_MaterialMaster_Response" /> </wsdl:message> <wsdl:portType name="MI_MaterialMaster_Lookup_AutoCAD_OB"> <wsdl:operation name="MI_MaterialMaster_Lookup_AutoCAD_OB"> <wsdl:input message="p1:MT_MaterialMaster_Lookup" /> <wsdl:output message="p1:MT_MaterialMaster_Response" /> </wsdl:operation> </wsdl:portType> <wsdl:binding name="MI_MaterialMaster_Lookup_AutoCAD_OBBinding" type="p1:MI_MaterialMaster_Lookup_AutoCAD_OB"> <binding transport="http://schemas.xmlsoap.org/soap/http" xmlns="http://schemas.xmlsoap.org/wsdl/soap/" /> <wsdl:operation name="MI_MaterialMaster_Lookup_AutoCAD_OB"> <operation soapAction="http://SomeCompany.com/xi/WebService/soap1.1" xmlns="http://schemas.xmlsoap.org/wsdl/soap/" /> <wsdl:input> <body use="literal" xmlns="http://schemas.xmlsoap.org/wsdl/soap/" /> </wsdl:input> <wsdl:output> <body use="literal" xmlns="http://schemas.xmlsoap.org/wsdl/soap/" /> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:service name="MI_MaterialMaster_Lookup_AutoCAD_OBService"> <wsdl:port name="MI_MaterialMaster_Lookup_AutoCAD_OBPort" binding="p1:MI_MaterialMaster_Lookup_AutoCAD_OBBinding"> <address location="http://bxdwas.MyCompany.com/XISOAPAdapter/MessageServlet?channel=:AutoCAD:SOAP_SND_Material_Lookup" xmlns="http://schemas.xmlsoap.org/wsdl/soap/" /> </wsdl:port> </wsdl:service> </wsdl:definitions>

    Read the article

  • directory with 980MB meta data, millions of files, how to delete it? (ext3)

    - by Alexandre
    Hello, So I'm stuck with this directory: drwxrwxrwx 2 dan users 980M 2010-12-22 18:38 sessions2 The directory's contents are small - just millions of tiny little files. I want to wipe it from the filesystem but have been unable to. My first try was: find sessions2 -type f -delete and find sessions2 -type f -print0 | xargs -0 rm -f but I had to stop because both caused escalating memory usage. At one point it was using 65% of the system's memory. So I thought (no doubt incorrectly) that it had to do with the fact that dir_index was enabled on the system. Perhaps find was trying to read the entire index into memory? So I did this (foolishly): tune2fs -O^dir_index /dev/xxx Alright, so that should do it. Ran the find command above again and... same thing. Crazy memory usage. I hurriedly ran tune2fs -Odir_index /dev/xxx to re-enable dir_index, and ran to Server Fault! 2 questions: 1) How do I get rid of this directory on my live system? I don't care how long it takes, as long as it uses little memory and little CPU. By the way, using nice find ... I was able to reduce CPU usage, so my problem right now is only memory usage. 2) I disabled dir_index for about 20 minutes. No doubt new files were written to the filesystem in the meantime. I re-enabled dir_index. Does that mean the system will not find the files that were written before dir_index was re-enabled, since their filenames will be missing from the old indexes? If so, and I know these new files aren't important, can I maintain the old indexes? If not, how do I rebuild the indexes? Can it be done on a live system? Thanks!

    Read the article

  • mysql UDF : fopen = permission denied

    - by lindenb
    Hi All, this is a question I already asked on SO, but I wonder if this could be a SysAdmin problem. I'm trying to create a MySQL UDF function; this function calls "fopen/fclose" to read a flat file stored in /data. But using errno (yes, I know it is bad in a MT program...) I can see that the function cannot open my file: "Permission denied". I tried to do a chmod -R 755 /data (as well as 777, chown -R mysql:mysql /data etc...) but it didn't change anything. When I copied the flat file to /tmp: OK, my UDF was able to 'fopen' the file. I'm puzzled. Currently, I've got: drwxrwxrwx 4 pierre root 4096 2010-05-26 16:51 /data drwxrwxrwx 3 pierre root 4096 2010-05-18 09:41 /data/dir1 drwxrwxrwx 3 pierre root 4096 2010-05-18 09:41 /data/dir1/dir2 drwxrwxrwx 4 pierre root 4096 2010-05-18 10:27 /data/dir1/dir2/dir3 -rw-r--r-- 1 pierre root 50685268 2005-12-10 00:01 /data/dir1/dir2/dir3/myfile.txt Any idea?

    Read the article

  • What do you use to store all of your personal data?

    - by codeflunky
    I have been on a quest for years to find the perfect tool to store all "my stuff". You know... personal information, code snippets, software keys, people's birthdays, whatever. There are lots of tools out there for this sort of thing, but I've never found any of them quite what I need. Ideally, I would just be able to type some notes, tag them (I don't like the idea of folder organization... too cumbersome) and then easily search and retrieve what I need later. It seems so simple, but for some reason I just can't find it. I currently use Backpack (sometimes), which is OK, but I hate the fact that you always have to create "pages" to store things. I don't want to have to do that. I want to just type some notes, tag it and save. That's it. And Backpack didn't even have search for a long time. What I do like about Backpack is that it's fast and it's web based. I've tried some desktop apps, which probably came closer to the functionality I want, but I just hate being tied to a single machine. I want to be able to get to my stuff anywhere, so the web based thing is a definite requirement. Anyway, I'm thinking about writing my own thing for this if I can't find anything, but before I make the attempt, I was wondering if anyone has any suggestions? I've used Backpack, Zoho Planner, Stikkit and Google Notes so far, and they are not quite to my liking. Anyone? (Sorry if this is off-topic, but I figured you guys might be legitimately into this kind of thing... you know, storing code snippets and such.) UPDATE: I've been using Evernote for a few days, and it is exactly what I've been looking for. It is totally tag based and allows both online and offline usage. The desktop app sits in your system tray and allows you to add whatever you want on the fly either as text notes or clippings from the browser. It also syncs it to the web (if you want) where you can get to it from anywhere using their web client. They even have a mobile client which I haven't used, but I will try it soon. Thanks again 18hrs. I wish I could give you 10 upvotes.

    Read the article

  • Why Is ModSecurity Unable to Access the Data Directory?

    - by tommytwoeyes
    Update I think we've solved this; the problem appears to have been a result of the /modsec_storage directory having an incorrect value for its SELinux context type. However, we're still not sure, because although after I changed the SELinux context type value, Apache was able to create files in that directory for the global and ip collections (global.dir/global.pag and ip.dir/ip.pag), the new files still have zero bytes. I'm new to ModSecurity and am not sure if the files are empty because something is wrong with the configuration or if ModSecurity has simply determined it doesn't need to store IP addresses persistently after each transaction ends. Anyone able to offer guidance here? I've recently installed ModSecurity (v2.5.12 / CRS v2.0.8) on our production server, and everything works great, except for these errors that it keeps writing to the Apache error log: Failed to access DBM file "/modsec_storage/global": Permission denied [hostname "www.internationalstudent.com"] [uri "/includes/soc_bookmarks/images/delicious.png"] [unique_id "LZ6jc38AAAEAAFO6408AAABO"] Failed to access DBM file "/modsec_storage/ip": Permission denied [hostname "www.internationalstudent.com"] [uri "/includes/soc_bookmarks/images/delicious.png"] [unique_id "LZ6jc38AAAEAAFO6408AAABO"] After following the instructions for file permission settings in the ModSecurity handbook by Ivan Ristic, with no success, I created a /modsec_storage directory, set the owner & group to apache, and set the permissions for the directory recursively to 777. However, ModSecurity is still reporting the same permission errors, so I am stumped. Can anyone tell me how to fix this?
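    For reference, adjusting a directory's SELinux context type in the way described above is typically done along these lines. The target type shown is an assumption (httpd_sys_rw_content_t is a common choice for content Apache must write to), so verify it against the policy in use on your system:

        # Inspect the current SELinux context of the directory
        ls -ldZ /modsec_storage

        # Persistently map the directory to an Apache-writable type, then apply it
        semanage fcontext -a -t httpd_sys_rw_content_t "/modsec_storage(/.*)?"
        restorecon -Rv /modsec_storage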

    Read the article

  • Create a mirrored software RAID with a bad-blocks HDD. How to check data integrity?

    - by rumburak
    There is an error in the System event log like this one: "The device, \Device\Harddisk1\DR1, has a bad block." Because of the above, I created RAID 1 on this disk and another one. I'm using Windows Server 2008 R2 software RAID volumes. The volume in Disk Manager is marked as "Failed Redundancy" and "At Risk". I can issue the "Reactivate Disk" command and it starts to re-sync, but after a while it stops and returns to the previous state. It stops re-syncing at a bad block on the old disk and logs the same error in the System event log. The old disk's status is Errors, the new disk's status is Online. How can I check that there is an exact copy of the old disk on the new one? It is a server machine, so I would prefer to keep it running during this check.

    Read the article

  • Macro keeps crashing; need to speed it up or rewrite it (Excel VBA, 50,000 lines of data)

    - by Joel
    Trying to speed up a macro that runs over 50,000 lines ! I have two ways of performing the same vba macro Sub deleteCommonValue() Dim aRow, bRow As Long Dim colB_MoreFirst, colB_LessFirst, colB_Second, colC_MoreFirst, colC_LessFirst, colC_Second As Integer Dim colD_First, colD_Second As Integer Application.ScreenUpdating = False Application.DisplayStatusBar = False Application.Calculation = xlCalculationManual Application.EnableEvents = False aRow = 2 bRow = 3 colB_MoreFirst = Range("B" & aRow).Value + 0.05 colB_LessFirst = Range("B" & aRow).Value - 0.05 colB_Second = Range("B" & bRow).Value colC_MoreFirst = Range("C" & aRow).Value + 0.05 colC_LessFirst = Range("C" & aRow).Value - 0.05 colC_Second = Range("C" & bRow).Value colD_First = Range("D" & aRow).Value colD_Second = Range("D" & bRow).Value Do If colB_Second <= colB_MoreFirst And colB_Second >= colB_LessFirst Then If colC_Second <= colC_MoreFirst And colC_Second >= colC_LessFirst Then If colD_Second = colD_First Or colD_Second > colD_First Then Range(bRow & ":" & bRow).Delete 'bRow delete, assign new value to bRow colB_Second = Range("B" & bRow).Value colC_Second = Range("C" & bRow).Value colD_Second = Range("D" & bRow).Value '----------------------------------------------------- Else Range(aRow & ":" & aRow).Delete bRow = aRow + 1 'aRow value deleted, assign new value to aRow and bRow colB_MoreFirst = Range("B" & aRow).Value + 0.05 colB_LessFirst = Range("B" & aRow).Value - 0.05 colB_Second = Range("B" & bRow).Value colC_MoreFirst = Range("C" & aRow).Value + 0.05 colC_LessFirst = Range("C" & aRow).Value - 0.05 colC_Second = Range("C" & bRow).Value colD_First = Range("D" & aRow).Value colD_Second = Range("D" & bRow).Value '----------------------------------------------------- End If Else bRow = bRow + 1 'Assign new value to bRow colB_Second = Range("B" & bRow).Value colC_Second = Range("C" & bRow).Value colD_Second = Range("D" & bRow).Value '----------------------------------------------------- End If Else bRow = bRow + 1 'Assign new value to bRow colB_Second = Range("B" & bRow).Value colC_Second = Range("C" & bRow).Value colD_Second = Range("D" & bRow).Value '----------------------------------------------------- End If If IsEmpty(Range("D" & bRow).Value) = True Then aRow = aRow + 1 bRow = aRow + 1 'finish compare aRow, assign new value to aRow and bRow colB_MoreFirst = Range("B" & aRow).Value + 0.05 colB_LessFirst = Range("B" & aRow).Value - 0.05 colB_Second = Range("B" & bRow).Value colC_MoreFirst = Range("C" & aRow).Value + 0.05 colC_LessFirst = Range("C" & aRow).Value - 0.05 colC_Second = Range("C" & bRow).Value colD_First = Range("D" & aRow).Value colD_Second = Range("D" & bRow).Value '----------------------------------------------------- End If Loop Until IsEmpty(Range("D" & aRow).Value) = True Application.ScreenUpdating = False Application.DisplayStatusBar = False Application.Calculation = xlCalculationAutomatic Application.EnableEvents = False End Sub or Sub deleteCommonValue() Dim aRow, bRow As Long Application.ScreenUpdating = False aRow = 2 bRow = 3 Do If Range("B" & bRow).Value <= (Range("B" & aRow).Value + 0.05) _ And Range("B" & bRow).Value >= (Range("B" & aRow).Value - 0.05) Then If Range("C" & bRow).Value <= (Range("C" & aRow).Value + 0.05) _ And Range("C" & bRow).Value >= (Range("C" & aRow).Value - 0.05) Then If Range("D" & bRow).Value = (Range("D" & aRow).Value) _ Or Range("D" & bRow).Value > (Range("D" & aRow).Value) Then Range(bRow & ":" & bRow).Delete Else Range(aRow & ":" & aRow).Delete bRow = aRow + 1 
Range("A" & aRow).Select End If Else bRow = bRow + 1 Range("A" & bRow).Select End If Else bRow = bRow + 1 Range("A" & bRow).Select End If If IsEmpty(Range("D" & bRow).Value) = True Then aRow = aRow + 1 bRow = aRow + 1 End If Loop Until IsEmpty(Range("D" & aRow).Value) = True End Sub I dont know if my best option will be to split the rows into multiple sheets?

    Read the article

  • Can I cancel a resize operation in GParted without causing data loss?

    - by Anderson Green
    I'm currently waiting for GParted to finish resizing a partition, but the progress bar is currently at 0, and it's been taking much longer than usual (perhaps an hour). Is it safe to cancel the resize operation? I don't want to wait days for the resize operation to complete, but I don't want to lose all of my files either. (Is there any way that I can simply pause the resize operation, attempt to recover files, and then resume the resize operation?) (An update: the operation has finally completed, and my files are still intact!)

    Read the article

  • How many times can data be read from a USB flash drive?

    - by John
    I am aware that performing writes on a USB flash drive degrades the life expectancy of the device. I have heard the quantity of writes is anywhere from 100 thousand to 10 million, but I have not heard about the number of read operations. Does reading from the device count toward this total? I am interested in writing only once to a flash drive and setting it to read-only, then reading files from the device a thousand or more times per day, but I am wondering whether (at, say, 1,000 reads per day) the flash drive will need to be replaced within 100 days (assuming a 100,000 r/w cycle lifetime)?

    Read the article

  • Security considerations on Importing Bulk Data by Using BULK INSERT or OPENROWSET(BULK...)

    - by Ice
    I do not fully understand the following article: http://msdn.microsoft.com/en-us/library/ms175915(SQL.90).aspx "In contrast, if a SQL Server user logs on by using Windows Authentication, the user can read only those files that can be accessed by the user account, regardless of the security profile of the SQL Server process." What if I define a SQL Agent job to perform this bulk insert? Is it the OWNER of the job who provides the security context?
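    For context, the kind of statement whose file access is being discussed looks like this. The table name and UNC path are placeholders only; the question above is precisely which account's security context must be able to read that file:

        -- Placeholder table and file path; which account needs read access to the
        -- file depends on the security context discussed in the linked article
        BULK INSERT dbo.TargetTable
        FROM '\\fileserver\share\data.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');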

    Read the article

  • AFP, SMB, NFS: which is the best data transfer protocol?

    - by Kami
    I have a computer with large hard disks running Gentoo. I have to serve medium-to-large files via a wired network to Apple devices (all of them running OS X). Which protocol is best for the following needs? Speed; ease of use (by the clients and the server); fewest limitations (max file size, limited charset for filenames); security.

    Read the article

  • nginx - 403 Forbidden

    - by michell90
    I've trouble to get aliases working correctly on nginx. When i try to access the aliases, /pma and /mba (see secure.example.com.conf), i get a 403 Forbidden but the base url works correctly. I read a lot of posts but nothing helped, so here i am. Nginx and php-fpm are running as www-data:www-data and the permissions for the directories are set to: drwxrwsr-x+ 5 www-data www-data 4.0K Dec 5 22:48 ./ drwxr-xr-x. 3 root root 4.0K Dec 4 22:50 ../ drwxrwsr-x+ 2 www-data www-data 4.0K Dec 5 13:10 mda.example.com/ drwxrwsr-x+ 11 www-data www-data 4.0K Dec 5 10:34 pma.example.com/ drwxrwsr-x+ 3 www-data www-data 4.0K Dec 5 11:49 www.example.com/ lrwxrwxrwx. 1 www-data www-data 18 Dec 5 09:56 secure.example.com -> www.example.com/ Im sorry for the bulk, but i thought better too much than too little. Here are the configuration files: /etc/nginx/nginx.conf user www-data www-data; worker_processes 1; error_log /var/log/nginx/error.log; #error_log /var/log/nginx/error.log notice; #error_log /var/log/nginx/error.log info; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; keepalive_timeout 65; include /etc/nginx/sites-enabled/*; } /etc/nginx/sites-enabled/secure.example.com server { listen 80; server_name secure.example.com; return 301 https://$host$request_uri; } server { listen 443; server_name secure.example.com; access_log /var/log/nginx/secure.example.com.access.log; error_log /var/log/nginx/secure.example.com.error.log; root /srv/http/secure.example.com; include /etc/nginx/ssl/secure.example.com.conf; include /etc/nginx/conf.d/index.conf; include /etc/nginx/conf.d/php-ssl.conf; autoindex off; location /pma/ { alias /srv/http/pma.example.com; } location /mda/ { alias /srv/http/mda.example.com; } } /etc/nginx/ssl/secure.example.com.conf ssl on; ssl_certificate /etc/nginx/ssl/secure.example.com.crt; ssl_certificate_key /etc/nginx/ssl/secure.example.com.key; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; /etc/nginx/conf.d/index.conf index index.php index.html index.htm; /etc/nginx/conf.d/php-ssl.conf location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; fastcgi_index index.php; fastcgi_param HTTPS on; fastcgi_param SCRIPT_FILENAME $request_filename; include fastcgi_params; } /var/log/nginx/secure.example.com.error.log 2013/12/05 22:49:04 [error] 29291#0: *2 directory index of "/srv/http/pma.example.com" is forbidden, client: 176.199.78.88, server: secure.example.com, request: "GET /pma/ HTTP/1.1", host: "secure.example.com" EDIT: forgot to mention, i'm running CentOS 6.4 x86_64 and nginx 1.0.15 Thanks in advance!

    Read the article

  • Puppet Directory and File ownership ignored

    - by Phil Sturgeon
    Puppet seems to be lying to me, which is not very nice. I am trying to set some files and directories included in /vagrant/src to be 666 and 777, and set the ownership group to the correct Apache user (using the PuppetLabs Apache module). Output from Puppet says yes. [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with /tmp/vagrant-puppet/manifests/default.pp... stdin: is not a tty No LSB modules are available. warning: require is a metaparam; this value will inherit to all contained resources warning: notify is a metaparam; this value will inherit to all contained resources notice: /Stage[main]//File[/vagrant/src/addons/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/addons/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/addons/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//Package[curl]/ensure: ensure changed 'purged' to 'present' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//File[/vagrant/src/uploads/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/uploads/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/uploads/]/mode: mode changed '0755' to '0777' notice: /Stage[main]/Apache/Service[httpd]/ensure: ensure changed 'stopped' to 'running' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/mode: mode changed '0755' to '0777' notice: Finished catalog run in 2.29 seconds Output from ls -lah says no: $ ls -lah /vagrant/src/ total 36K drwxr-xr-x 1 vagrant vagrant 510 2012-07-03 00:11 . drwxr-xr-x 1 vagrant vagrant 340 2012-07-03 08:08 .. drwxr-xr-x 1 vagrant vagrant 136 2012-07-03 00:11 addons drwxr-xr-x 1 vagrant vagrant 102 2012-07-03 00:11 assets drwxr-xr-x 1 vagrant vagrant 510 2012-07-03 07:45 .git -rw-r--r-- 1 vagrant vagrant 1.3K 2012-07-03 00:11 .gitignore -rwxr-xr-x 1 vagrant vagrant 1.4K 2012-07-03 00:11 .htaccess -rwxr-xr-x 1 vagrant vagrant 8.8K 2012-07-03 00:11 index.php drwxr-xr-x 1 vagrant vagrant 442 2012-07-03 00:11 installer -rwxr-xr-x 1 vagrant vagrant 2.8K 2012-07-03 00:11 LICENSE -rw-r--r-- 1 vagrant vagrant 1.1K 2012-07-03 00:11 phpdoc.dist.xml -rw-r--r-- 1 vagrant vagrant 3.3K 2012-07-03 00:11 README.md drwxr-xr-x 1 vagrant vagrant 204 2012-07-03 00:11 system -rw-r--r-- 1 vagrant vagrant 42 2012-07-03 00:11 .travis.yml drwxr-xr-x 1 vagrant vagrant 102 2012-07-03 00:11 uploads Whats up with that? My entire config can be found here.

    Read the article

  • How to copy a partition from one disk to another (boot partition, keeping all the vital data)?

    - by Patryk
    I have bought a new laptop but the HDD, which runs at 5400 rpm, is not sufficient for me. The laptop runs Windows 7 64-bit. I have my 'old' one (a better one - Seagate Momentus 7200 rpm) and I would like to replace it but without reinstalling everything. And there my question arises: can I copy my boot partition from my laptop hard drive to my old drive so that it will boot from it properly? If so, then how to do it? Will Norton Ghost be useful here? My point would be to just replace this partition and leave the rest.

    Read the article

  • What's the best way to migrate SELECT applications and data from an old Mac to a new one?

    - by jaydles
    I know I can make an easy transfer of everything using time machine, but I'm trying to transfer the minimum necessary to the new computer (applications I currently use, Aperture database, etc.) in order to keep the new machine as clean as possible, and avoid starting out with any legacy problems with permissions, etc. Clearly, I can copy individual apps and databases to an external drive and then install them on the new machine, but I'm trying to find an easier way.

    Read the article

  • Why can't I boot in to Windows Recovery Environment to fix my HDD or salvage my data?

    - by Kevin
    I've been trying to get in to WindowsRE to salvage the files on my Sony Vaio laptop after it failed to load Vista (it finally, consistently displays "Error loading operating system" after months of such intermittent failures, usually rectified via restarts or utilizing Startup Repair or CHKDSK from WindowsRE) . The problem is, after successfully accessing it once after this failure (and many times before over the course of the laptop's life), I can no longer get it to load. During the last successful access (right after the failure), I ran startup repair, which itself failed and notified me that the boot sector was corrupt. I attempted to head in to Sony's proprietary recovery tools menu, which is accessible from WindowsRE when it is loaded from the recovery partition or recovery disk, however it hung. I have since been unable to access the recovery environment after restarting, using any of these methods: Access via the recovery partition (pressing F10 on boot) Access via recovery DVD (created using the same computer when it was healthy) Access via a Windows Vista installation DVD All three methods produce the same results: The computer acknowledges the boot attempt The computer successfully gets passed the "Windows is loading files" screen The computer successfully gets passed the Windows loading screen The computer then stalls at a black screen, while showing HDD activity (via indicator light). After a few minutes, the HDD activity ceases, and after a few more minutes, the over sized cursor that is utilized in WindowsRE appears on the black screen. The actual recovery environment, however, never appears, even after leaving the computer in such a state overnight. What is fustrating is that other bootable utilities, such as SeaTools for DOS and MemTest, boot up and run fine. In running perfectly normally, MemTest was able to produce a plethora of errors utilizing my RAM. I'm inclined to believe the RAM's faultiness may causing the WindowsRE booting to fail. Would this be a valid assumption? If I'm not mistaken, booting from external media utilizes the RAM, so such a reason is plausible, assuming my knowledge of bootloading is correct. Other than that, I can't figure out any reason why all the bootable utilities except WindowsRE run fine. Does anyone know what the problem is, or could be? Any solutions?

    Read the article

  • Best tool for monitoring backups, etc. and trending statistics from that data

    - by Randy Syring
    I have done some research on Nagios, OpenNMS, and Zenoss but am not confident that I have found what I am looking for. The main driving force for me right now is being able to monitor backups. This includes MySQL, MSSQL, and eventually some file-system backups. We have a tool that wraps the backup process for these different systems and collects statistics, such as the number of databases backed up, the size of the database backup file, the size of the compressed backup file, the time taken to make the backup, and the time taken to zip the file. I want to be able to: A) receive notifications if the jobs do not run according to schedule; B) set thresholds on the statistics that would trigger notifications; and C) trend and graph the statistics. I am planning on sending this information to the monitoring application through an HTTP POST (a sketch of what that push might look like follows below); alternatively, the monitoring application could pull it from a log file. However, we will have other processes with other "arbitrary" (from the monitoring system's perspective) statistics that we will want to monitor and trend, so flexibility is very important. The tool or tools should also be able to do general monitoring and trending of network interfaces, server load, etc. Once we get the backup monitoring in place, we will want to include those items as well. Thanks.
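
    Since the question mentions pushing the collected statistics to the monitoring application through an HTTP POST, here is a minimal, tool-agnostic sketch of what that push could look like. The endpoint URL, the JSON field names, and the job name are assumptions made purely for illustration; they are not the API of Nagios, OpenNMS, or Zenoss.

    # A minimal sketch (not tied to any particular monitoring product) of pushing
    # backup statistics to a collector via HTTP POST, using only the Python
    # standard library. The URL and all field names are hypothetical.
    import json
    import urllib.request

    def post_backup_stats(endpoint_url, stats):
        """Send one backup run's statistics as a JSON document."""
        body = json.dumps(stats).encode("utf-8")
        request = urllib.request.Request(
            endpoint_url,
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.status  # e.g. 200 if the collector accepted the data

    if __name__ == "__main__":
        example_stats = {
            "job": "mysql-nightly",            # hypothetical job name
            "databases_backed_up": 12,
            "backup_size_bytes": 5368709120,
            "compressed_size_bytes": 1073741824,
            "backup_seconds": 840,
            "compress_seconds": 190,
        }
        # post_backup_stats("http://monitoring.example.com/api/metrics", example_stats)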

    Read the article

  • How does one ensure, or even guarantee, that server time is synced correctly between dozens of servers across multiple data centers in different locations?

    - by forestclown
    Currently our web applications contain logic to check whether the data sent to the web server has expired by comparing the timestamp in the data with the date/time of the server. Everything went well until someone from a data center accidentally modified one of the web servers' date/time and caused some disruption in our web services. My managers are of course not happy with this, and said we shouldn't use timestamps to check expiry in the first place... anyway.... Network Time Protocol is implemented; because the data centers are spread across different continents, we have one NTP server in each data center. The servers within each data center have cron jobs that check their time against the NTP server in the same data center, and if the time is out of sync they automatically update the server's date/time. But our managers are still not happy with this, and think it could just as easily cause the same problem: e.g. what if someone accidentally modifies the NTP server's date/time? What if the NTP servers are out of sync with each other? Which NTP servers can we really trust? And so on. So my questions are: What are the current practices for syncing date/time between servers across multiple data centers or locations? How does one manage timestamps between web apps? E.g. Server A sends data (containing Server A's timestamp) to Server B, and Server B compares its own time with the timestamp in the data to see whether it has expired (this is to avoid HTTP replay) - a sketch of such a check, with an allowance for clock skew, follows below. Should we really not use timestamp checks at all? Thanks & Best Regards
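
    One detail worth making explicit, whatever NTP layout is chosen, is that the receiving server can tolerate a bounded amount of clock skew rather than requiring the two clocks to agree exactly. The sketch below only illustrates that idea; the five-minute freshness window, the thirty-second skew allowance, and the function name are assumptions, not recommended values or anyone's published API.

    # A minimal sketch of an expiry check with an explicit tolerance for clock
    # skew between the sending and receiving servers. Both sides are assumed to
    # stamp messages with Unix time in UTC.
    import time

    MAX_AGE_SECONDS = 300       # how long a message is considered fresh (assumed)
    ALLOWED_SKEW_SECONDS = 30   # tolerated clock difference between servers (assumed)

    def is_expired(sent_at_epoch, now_epoch=None):
        """Return True if a message stamped at sent_at_epoch should be rejected."""
        if now_epoch is None:
            now_epoch = time.time()
        age = now_epoch - sent_at_epoch
        # A slightly negative age just means the sender's clock runs ahead of
        # ours; reject only if it is ahead by more than the allowed skew.
        if age < -ALLOWED_SKEW_SECONDS:
            return True
        return age > MAX_AGE_SECONDS + ALLOWED_SKEW_SECONDS

    # Example: a message stamped 2 minutes ago is accepted; one stamped 10
    # minutes ago is rejected.
    assert not is_expired(time.time() - 120)
    assert is_expired(time.time() - 600)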

    Read the article

  • Does the Fast Video Download addon share my data?

    - by Frost Shadow
    I've installed the Fast Video Download addon, version 3.0.8, for Firefox to download Flash videos, such as those from YouTube. What I'm wondering is: how does the addon download the video, and can other people see that I'm downloading it? For example, is all the software needed to download the video already on my computer, or does the addon contact someone else to get the video, or let them know about it? Can the web page's administrator see that I'm downloading the video?

    Read the article

  • Does PHP *have* to serialize/unserialize session data between each HTTP request? Or is there a setting?

    - by Pete Alvin
    I think I understand why sessions are considered evil, but for a snappy user experience I don't want to have to re-query the database on each HTTP request. (As a comparison, Java servlets can effortlessly keep tons of session objects in memory.) Can PHP be configured to do this, or does it have to serialize because it runs under CGI/FastCGI and is therefore, by definition, a new process each time a request comes in? I will be running PHP on a LAMP stack.

    Read the article

  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I run a file-hosting site that serves a large number of semi-randomly accessed files. The setup is as follows: a high-horsepower front-end-plus-DB server that also does encoding for files that need it; a "fresh file" server, which stores newly uploaded content that is usually accessed heavily and has 500 GB of RAIDed SSD storage that can push over 3 Gbit of traffic; and three cheap node servers, each containing 2 x 750 GB SATA drives in RAID 1, to which files older than two weeks are archived from the SSD server mentioned above. Files on each server are accessed via subdomains (via modsec) in a straightforward fashion (server1.domain.com, server2.domain.com, etc.). Here is my problem: I introduced a "premium" service where people pay a small fee every month and get ad-free, quick access to the content on the site. Once they are logged in, they access the same files via premium.server1.domain.com through a different modsec script with a different pass phrase. That all works fine and dandy... except that the cheap node servers are all I/O bound, so accessing the files on them via a different, unsaturated network makes no difference, since the server cannot read off the drives fast enough. What would be a good way to make the files on the site accessible via two different network routes, one of which will be saturated (the "free" network) while everything else is on an unsaturated "premium" network?

    Read the article
