Search Results

Search found 478 results on 20 pages for 'warehouse'.

  • Issues with duplicating TFS 2005 to a virtual server

    - by Hitchhiker
    We have a problem with our current TFS installation. For some reason, which I won't get into, the SharePoint DBs (sts_content, sts_config) were upgraded to WSS 3 (MOSS). So now none of our team-project sites work, we have no access to our documents, and we can't create new team projects. We can still work with version control, though. We wanted to "play" with the server and try to fix it without affecting the users, so we used P2V for VMware to duplicate the server to a virtual one. We then changed all of the relevant configuration to point to the new server, as explained in the MSDN article. The step of rebuilding the warehouse (with setupwarehouse) failed. Also, we can't access the VersionControl web service (ourserver:8080/VersionControl/v1.0/repository.asmx). We are seeing these errors in the event log:
        TF53002: Unable to obtain registration data for application Build.
        TF53005: Unable to retrieve the Team Foundation Server installed UI culture.
        TF53002: Unable to obtain registration data for application VersionControl.
        TF30040: The database is not correctly configured. Contact your Team Foundation Server administrator.
    The solutions suggested in this blog post did not help, so now we're kind of stuck. Any assistance will be appreciated.

  • Windows CE restores default configuration after restart (Motorola MC3190)

    - by jarek
    Good morning, everyone! I have a problem with a Motorola MC3190 hand scanner running Windows CE. I've got a few of them to write a new program for a kind of warehouse. There was already a program installed which the customers had used before, so I deleted it and installed the new software I had just written. It runs very well, but when I take out the battery and leave the device overnight without a power supply, it restores its whole configuration: the old program is back, the old wireless configuration is back and... yeah. The scanner is restored to the configuration it was running when I received it a few weeks ago. What I want is to set up the scanner's configuration so that after a long power-off my program and my configuration are restored. I truly believe someone knows how to do this. Time is running out, and I believe the customer will be kind of annoyed when he changes the battery and the program he bought is gone. ;-) Regards, Jarek

  • Some can reach bidmail.com, others can't

    - by user69426
    On a Windows 7 Professional machine, in Chrome, one of our estimating assistants can't get to www.bidmail.com, while the other three can. On his machine I ran nslookup for bidmail.com and it fails to resolve. I then went to a machine that could reach bidmail and ran nslookup there; it can't resolve it either. I was skeptical and thought maybe it was a cached page, so I cleared the cache, went back to bidmail.com, and was still able to reach the page, log in, look up a newly posted bid, and download the file. Yet I cannot resolve it through nslookup, I can't ping www.bidmail.com, and I can't trace it. I remoted to our other warehouse, which is set up as a workgroup, and attempted to nslookup bidmail there; that nslookup failed too... and that machine, which had never been to bidmail before, was able to connect to the website! I am totally confused: if I can't ping it and I can't use nslookup to resolve it, how in the hell is Chrome getting to the page, and how do I get this guy back on? Also, while typing this, I took a new laptop out of the box, plugged it in with no updates, and it can get to bidmail! OMG!

  • Virtual Machine Network Architecture, Isolating Public and Private Networks

    - by Mark
    I'm looking for some insight into best practices for network traffic isolation within a virtual environment, specifically under VMware ESXi. Currently I have (in testing) 1 hardware server running ESXi, but I expect to expand this to multiple pieces of hardware. The current setup is as follows: 1 pfSense VM; this VM accepts all outside (WAN/internet) traffic and performs firewall/port-forwarding/NAT functionality. I have multiple public IP addresses routed to this VM that are used for access to individual servers (via per-incoming-IP port-forwarding rules). This VM is attached to the private (virtual) network that all other VMs are on. It also manages a VPN link into the private network with some access restrictions. This isn't the perimeter firewall but rather the firewall for this virtual pool only. I have 3 VMs that communicate with each other and also have some public access requirements:
        1 LAMP server running an eCommerce site, public internet accessible
        1 accounting server, accessed via Windows Server 2008 RDS services for remote access by users
        1 inventory/warehouse management server, with VPN to client terminals in warehouses
    These servers constantly talk with each other for data synchronization. Currently all the servers are on the same subnet/virtual network and connected to the internet through the pfSense VM. The pfSense firewall uses port forwarding and NAT to allow outside access to the servers for services and for server access to the internet. My main question is this: is there a security benefit to adding a second virtual network adapter to each server and controlling traffic such that all server-to-server communication is on one separate virtual network, while any access to the outside world is routed through the other network adapter, through the firewall, and on to the internet? This is the type of architecture I would use if these were all physical servers, but I'm unsure if the networks being virtual changes the way I should approach locking down this system. Thank you for any thoughts or direction to any appropriate literature.

  • File Sync Solution for Batch Processing (ETL)

    - by KenFar
    I'm looking for a slightly different kind of sync utility - not one designed to keep two directories identical, but rather one intended to keep files flowing from one host to another. The context is a data warehouse that currently has a custom-developed solution that moves 10,000 files a day, some of which are 1+ GB gzipped files, between Linux servers via ssh. Files are produced by the extract process, then moved to the transform server where a transform daemon is waiting to pick them up. The same process happens between transform & load. Once the files are moved they are typically archived on the source for a week, and the downstream process likewise moves them to temp and then archive as it consumes them. So, my requirements & desires:
        - It is never used to refresh updated files - only to deliver new files.
        - Because it's delivering files to downstream processes, it needs to rename the file once done so that a partial file doesn't get picked up.
        - In order to simplify recovery, it should keep a copy of the source files - but rename them or move them to another directory.
        - If the transfer fails (network down, file system full, permissions, file locked, etc.), it should retry periodically - and never fail in a non-recoverable way, or in a way that sends the file twice or never sends the file.
        - Should be able to copy files to 2+ destinations.
        - Should have a consolidated log so that it's easy to find problems.
        - Should have an optional checksum feature.
    Any recommendations? Can Unison do this well?

  • Licensing SQL Server 2012 Reporting Services w/ SharePoint 2010

    - by Evan M.
    Here's my situation: I have 1 VM that is running SharePoint 2010 SP1. I have a different physical server running SQL Server 2008 R2 that hosts all the configuration and content databases for SharePoint. Now we want to start providing BI capabilities to our users with SharePoint and SQL Server. With its new features, 2012 is the obvious way to go. To support this, I'm looking to build a new VM that will have SQL Server 2012 installed with Analysis Services and SSIS, which will be the platform that gets our data from our Oracle databases, puts it in a warehouse hosted by the SQL 2012 instance, and builds it into cubes. What's getting me about the platform is licensing for Reporting Services and PowerPivot. My plan was to install SSRS and PowerPivot on the current SharePoint server, but my understanding of the licensing is that instead of only the new SQL server being licensed, I'd have to license both the new server and the SharePoint server. Conversely, I could install SharePoint onto the SQL server and only have to get a second SharePoint license, but then I'd have the added complexity of deploying a separate application server, and it combines my data and application servers. Is my licensing understanding correct, or can I have SSRS and PowerPivot installed separately without incurring additional licensing costs?

  • Software disables itself when the PC is accessed via RDP

    - by blckgrffn
    We have a large, specialty printer with vendor-specific software that enables its use beyond simply showing up as a printer in the Windows Control Panel. This software recognizes when we RDP into the machine and "disconnects" the PC from the printer within its proprietary control panel. All is well when an application like TeamViewer is used to access the machine. Ostensibly, the application is helping us be safe by "enforcing" that the machine used for the printer is a walk-up workstation, or so the support folks informed me. If TeamViewer etc. fixes the issue, then what is the problem? We have many headless workstations in our warehouse attached to a variety of specialty machines, all used via RDP. We want/need to keep access to the machines the same for the sanity of our production staff. The meat of the question: how, specifically, might a machine know that it is being accessed via RDP (terminal services management???), and how might this be defeated without altering an application or driver? Of note, the system being used is a Windows 7 Pro machine hooked to the printer via USB. Thanks! Nat
    Edit: Is there any combination of /admin switches, etc. that will possibly fix this? Simply using /admin did not.

  • Join 3 tables in one LINQ-EF query

    - by user100161
    I have to fill the warehouse table cOrders programmatically using ADO.NET EF. I have the SQL command, but I don't know how to do this with LINQ.

        static void Main(string[] args)
        {
            var SPcontex = new PI_NorthwindSPEntities();
            var contex = new NorthwindEntities();
            dCustomers dimenzijaCustomers = new dCustomers();
            dDatum dimenzijaDatum = new dDatum();
            ...

        CREATE TABLE PoslovnaInteligencija.dbo.cOrders(
            cOrdersID int PRIMARY KEY IDENTITY(1,1),
            OrderID int NOT NULL,
            dCustomersID int FOREIGN KEY REFERENCES PoslovnaInteligencija.dbo.dCustomers(dCustomersID),
            dEmployeesID int FOREIGN KEY REFERENCES PoslovnaInteligencija.dbo.dEmployees(dEmployeesID),
            OrderDateID int FOREIGN KEY REFERENCES PoslovnaInteligencija.dbo.dDatum(sifDatum),
            RequiredDateID int FOREIGN KEY REFERENCES PoslovnaInteligencija.dbo.dDatum(sifDatum),
            ShippedDateID int FOREIGN KEY REFERENCES PoslovnaInteligencija.dbo.dDatum(sifDatum),
            dShippersID int FOREIGN KEY REFERENCES PoslovnaInteligencija.dbo.dShippers(dShippersID),
            dShipID int FOREIGN KEY REFERENCES PoslovnaInteligencija.dbo.dShip(dShipID),
            Freight money,
            WaitingDay int
        )

        INSERT INTO PoslovnaInteligencija.dbo.cOrders (OrderID, dCustomersID, dEmployeesID, OrderDateID,
            RequiredDateID, dShippersID, dShipID, Freight, ShippedDateID, WaitingDay)
        SELECT OrderID, dc.dCustomersID, de.dEmployeesID, orderD.sifDatum, requiredD.sifDatum,
            dShippersID, ds.dShipID, Freight,
            ShippedDateID = CASE WHEN (ShippedDate IS NULL) THEN -1 ELSE shippedD.sifDatum END,
            WaitingDay = CASE WHEN (shippedD.sifDatum - orderD.sifDatum) IS NULL THEN -1
                              ELSE shippedD.sifDatum - orderD.sifDatum END
        FROM PoslovnaInteligencija.dbo.dShippers AS s,
             PoslovnaInteligencija.dbo.dCustomers AS dc,
             PoslovnaInteligencija.dbo.dEmployees AS de,
             PoslovnaInteligencija.dbo.dShip AS ds,
             PoslovnaInteligencija.dbo.dDatum AS orderD,
             PoslovnaInteligencija.dbo.dDatum AS requiredD,
             PoslovnaInteligencija.dbo.Orders AS o
             LEFT OUTER JOIN PoslovnaInteligencija.dbo.dDatum AS shippedD
                 ON shippedD.datum = DATEADD(dd, 0, DATEDIFF(dd, 0, o.ShippedDate))
        WHERE o.ShipVia = s.ShipperID
          AND dc.CustomerID = o.CustomerID
          AND de.EmployeeID = o.EmployeeID
          AND ds.ShipName = o.ShipName
          AND orderD.datum = DATEADD(dd, 0, DATEDIFF(dd, 0, o.OrderDate))
          AND requiredD.datum = DATEADD(dd, 0, DATEDIFF(dd, 0, o.RequiredDate));

  • Business Logic Layer Pattern on Rails? MVCL

    - by Fabiano PS
    This is a broad question, and I'd appreciate no short/dumb answers like "Oh, that is the model's job, this question is retarded (period)". PROBLEM: Where I work, people spent 2 years creating a system for managing the manufacturing process on demand, kept as simple yet broad as possible, involving selling, buying, and assembly. The system is coded on Ruby on Rails. The app has been changed many times, and the result is a mess of callbacks (some are called several times), 200+ models, and fat controllers: total bad. The QUESTION is: is there a gem or pattern designed to handle the logic of a large Rails app? The logic would be able to fully talk to the models (whose only concern would be data format handling and validation). What I EXPECT is to move complexity out of the various controllers and hard-to-track callbacks into files whose responsibility is to handle one business operation's logic. In some cases there is the need to wait for a response; in others, validating the input is enough and a background process would take place. For example - sell some products (need to wait for the operation to finish):
        1. Set up a view able to take the products input
        2. The controller gets the product list entered by the employee and calls the logic:
            Logic::ExecuteWithResponse('sell', 'products',
                :prods => @product_list_with_qtt,
                :when => @date,
                :employee => current_user())
    This logic would handle the buying order, assembly order, machine schedule, warehouse reservation, and others. Bear in mind that a callback on SalesOrder is not enough, since it depends on where it is called from (no field for that) and on the class of the user, among other things not visible to the model; in some cases it would also take too long for the model to process.

  • Copy data from different worksheet to a master worksheet but no duplicates

    - by sam
    Hi all, I want to clarify my initial question in the hope of a possible solution from any savior out there. Say I have 3 Excel sheets, one for each user, for data entry, located in separate workbooks to avoid Excel shared-workbook problems. I also have a master sheet in another workbook, and I want data entered on those sheets (specifically sheet 1 of each workbook) copied to the next available row of sheet 1 in the master sheet as the users enter it. I need VBA code that can copy each record without copying a duplicate row into the master sheet, but that highlights the duplicate row, looks up the initial record in the master sheet, and returns the name of the inputter from column (I), where each user signs their initials after every row of entry, like below. All 4 worksheets are formatted as below:

        lastname  account   cardno.  type   tran amount  date       location   comments               initials
        JAME      65478923  1975     cash   500          4/10/2010  miles st.  this acct is resolved  MLK
        BEN       52436745  1880     CHECK  400          4/12/2010  CAREY ST   ongoing investigation  MLK
        JAME      65478923  1975     cash   500          4/10/2010  miles st.  this acct is resolved  MLK

    I need the VBA to recognize a duplicate only if the account number and the card number match the initial record. If a duplicate exists, a pop-up message should be displayed saying that a duplicate exists and returning the initial inputter's initials (say MLK, from column (I)) to any user inputting in the individual worksheets other than the warehouse (master) sheet. Any idea will be appreciated.

  • Empty data problem - data layer or DAL?

    - by luckyluke
    I'm designing a new app now and giving the following question a lot of thought. I consume a lot of data from the warehouse, and the entities have a lot of dictionary-based values (currency, country, tax-whatever data) - dimensions. I cannot be assured, though, that there won't be nulls. So I am thinking:
        - create an empty value in each of the dictionaries with a special key ID, i.e. -1
        - have the ETL (SSIS) do the correct stuff and insert -1 where it needs to
        - let the DAL know that -1 is special (static const, whatever)
        - don't bother checking for nullness of dictionary entries in the code, because THEY will always have a value
    But maybe I should be thinking:
        - import the data AS IS
        - let the DAL do the thinking, using the empty-record pattern
        - still don't care in the code, because the business layer will have what it needs from the DAL
    I think this is more of an approach thing, but maybe I am missing something important here... What do you think? Am I clear? Please don't confuse it with the empty-record problem; I do use the emptyCustomer thing all the time, and other defaults too.
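
    For illustration, here is a minimal T-SQL sketch of the first approach; the dimension, fact, and staging names are invented for this example, not taken from the question:

        -- Hypothetical dimension gets a dedicated "unknown" member with key -1
        INSERT INTO dbo.DimCurrency (CurrencyID, CurrencyCode, CurrencyName)
        VALUES (-1, 'N/A', 'Unknown');

        -- During the ETL load, failed lookups map to -1 instead of NULL
        INSERT INTO dbo.FactOrder (OrderID, CurrencyID, Amount)
        SELECT s.OrderID,
               ISNULL(c.CurrencyID, -1),   -- the special key, never NULL
               s.Amount
        FROM staging.Orders AS s
        LEFT JOIN dbo.DimCurrency AS c
            ON c.CurrencyCode = s.CurrencyCode;

    With something like this in place, the DAL only has to know the single constant -1.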

  • Is there an equivalent of a trigger for general stored procedure execution on SQL Server?

    - by Arj
    Hi all, hope you can help. Is there a way to detect when a stored proc is being run on SQL Server without altering the SP itself? Here's the requirement: we need to track users running reports from our enterprise data warehouse, as the core product we use doesn't allow for this. Both the core product reports and a slew of in-house ones we've added all return their data from individual stored procs. We don't have a practical way of altering the parts of the product's web pages where reports are called from. We also can't change the stored procs for the core product reports. (It would be trivial to add a logging line to the start/end of each of our in-house ones.) What I'm trying to find, therefore, is whether there's a way in SQL Server (2005/2008) to execute a logging stored proc whenever any other stored procedure runs, without altering those stored procedures themselves. We have general control over the SQL Server instance itself as it's local; we just don't want to change the product's stored procs. Anyone have any ideas? Is there a kind of "stored proc executing trigger"? Is there an event model for SQL Server that we can hook custom .NET code into? (Just to discount it from the start: we want to try to make a change on SQL Server rather than get into capturing the report being run from the product's web pages, etc.) Thoughts appreciated. Thanks
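
    One avenue worth sketching (my suggestion, not from the question): on SQL Server 2008, an Extended Events session can capture completed modules (including stored procedures) server-side without touching the procs. The event, action, and target names below are from the 2008 feature set and should be verified against your build; the database_id value is hypothetical:

        -- Hedged sketch: log completed stored procedures in one database
        CREATE EVENT SESSION [TrackReportProcs] ON SERVER
        ADD EVENT sqlserver.module_end(
            ACTION (sqlserver.username, sqlserver.sql_text)
            WHERE (sqlserver.database_id = 7))   -- hypothetical warehouse DB id
        ADD TARGET package0.ring_buffer;

        ALTER EVENT SESSION [TrackReportProcs] ON SERVER STATE = START;

    On 2005, which predates Extended Events, a server-side trace created with sp_trace_create and the SP:Completed event would be the rough equivalent.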

  • Calculate differences between rows while grouping with SQL

    - by Guido
    I have a PostgreSQL table containing movements of different items (models) between warehouses. For example, the following record means that 5 units of model 1 have been sent from warehouse 1 to 2:

        source  target  model  units
        ------  ------  -----  -----
        1       2       1      5

    I am trying to build a SQL query to obtain the difference between units sent and received, grouped by model. Again with an example:

        source  target  model  units
        ------  ------  -----  -----
        1       2       1      5      -- 5 sent from 1 to 2
        1       2       2      1
        2       1       1      2      -- 2 sent from 2 to 1
        2       1       1      1      -- 1 more sent from 2 to 1

    The result should be:

        source  target  model  diff
        ------  ------  -----  ----
        1       2       1      2      -- 5 sent minus 3 received
        1       2       2      1

    I wonder if this is possible with a single SQL query. Here is the table creation script and some data, just in case anyone wants to try it:

        CREATE TEMP TABLE movements (
            source INTEGER,
            target INTEGER,
            model  INTEGER,
            units  INTEGER
        );
        insert into movements values (1,2,1,5);
        insert into movements values (1,2,2,1);
        insert into movements values (2,1,1,2);
        insert into movements values (2,1,1,1);
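
    A sketch of one possible single-query answer (my own attempt, assuming each pair should be reported from the lower-numbered warehouse's side): fold every movement onto a canonical (source, target) pair with a signed quantity, then aggregate:

        -- Untested sketch: normalize direction, then sum signed units per model
        SELECT low  AS source,
               high AS target,
               model,
               SUM(signed_units) AS diff
        FROM (
            SELECT LEAST(source, target)    AS low,
                   GREATEST(source, target) AS high,
                   model,
                   CASE WHEN source < target THEN units ELSE -units END AS signed_units
            FROM movements
        ) AS normalized
        GROUP BY low, high, model;

    On the sample data this yields (1, 2, 1, 2) and (1, 2, 2, 1), matching the expected result.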

  • Logging *Business* Events - use logging framework?

    - by UpTheCreek
    Hi, something here doesn't feel right to me, so I would like the community's input - perhaps I am approaching this in the wrong way. Q: Is it appropriate to use traditional infrastructure logging frameworks (like log4net) to log business events? When I say business events, I mean I want a global log like this:
        xx:xx Customer A purchased widget B.
        xx:xx Widget B was dispatched from warehouse.
        xx:xx Customer B payment declined.
    Most traditional infrastructure logging frameworks have event levels something like this: FATAL, ERROR, WARN, INFO, DEBUG. And of course these messages don't fit well into that. The best description would be INFO, but these are important events, and INFO is of very low importance. I would still like this as a 'log' (e.g. I don't want to have to extract it from my business objects each time I want to see it). It seems to me I have two options:
        1) Use a framework like log4net and just define a special logger for this (and live with the fact that it doesn't feel right).
        2) Provide a service for this that doesn't rely on a traditional logging framework.
    I'm leaning towards 2. What has anyone else done in similar situations? Thanks!
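
    For a picture of what option 2 might look like at the storage level, here is a purely illustrative SQL sketch; the schema is invented for this example, not taken from the question:

        -- Business events as first-class rows rather than log lines
        CREATE TABLE BusinessEvent (
            EventID     int IDENTITY(1,1) PRIMARY KEY,
            OccurredAt  datetime     NOT NULL DEFAULT GETUTCDATE(),
            EventType   varchar(50)  NOT NULL,   -- e.g. 'WidgetPurchased', 'PaymentDeclined'
            EntityType  varchar(50)  NOT NULL,   -- e.g. 'Customer', 'Widget'
            EntityID    varchar(50)  NOT NULL,
            Description varchar(500) NOT NULL
        );

        INSERT INTO BusinessEvent (EventType, EntityType, EntityID, Description)
        VALUES ('WidgetPurchased', 'Customer', 'A', 'Customer A purchased widget B.');

    A dedicated table keeps the business log queryable on its own terms instead of forcing it through severity levels.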

  • Excessive httpd processes stacking up on my Rails + Apache2 + Passenger production setup

    - by LeoAlmighty
    I have a Rails + Apache2 + Postgres + Passenger application running in production mode on OS X Snow Leopard. The application serves as a data warehouse for another application in the cloud, so I'm constantly getting API calls to my OS X production build. After a recent reboot, I'm finding a ton of httpd processes stacking up, eventually requiring an Apache restart. I haven't changed any settings; everything was running fine before. Any ideas on the best way to troubleshoot this?

        $ ps -ef | grep httpd
         0  6203     1  0  0:00.20  ??  0:00.47  /usr/sbin/httpd -D FOREGROUND
        70  6222  6203  0  0:00.05  ??  0:00.11  /usr/sbin/httpd -D FOREGROUND
        70  6224  6203  0  0:00.31  ??  0:00.50  /usr/sbin/httpd -D FOREGROUND
        70  6233  6203  0  0:00.05  ??  0:00.10  /usr/sbin/httpd -D FOREGROUND
        70  6234  6203  0  0:00.43  ??  0:00.64  /usr/sbin/httpd -D FOREGROUND
        70  6243  6203  0  0:00.02  ??  0:00.03  /usr/sbin/httpd -D FOREGROUND
        70  6319  6203  0  0:00.08  ??  0:00.16  /usr/sbin/httpd -D FOREGROUND
        70  6334  6203  0  0:00.02  ??  0:00.05  /usr/sbin/httpd -D FOREGROUND
        70  6469  6203  0  0:00.04  ??  0:00.08  /usr/sbin/httpd -D FOREGROUND
        70  6487  6203  0  0:00.36  ??  0:00.48  /usr/sbin/httpd -D FOREGROUND
        70  6593  6203  0  0:00.36  ??  0:00.48  /usr/sbin/httpd -D FOREGROUND
        70  6709  6203  0  0:00.04  ??  0:00.08  /usr/sbin/httpd -D FOREGROUND
        70  6718  6203  0  0:00.04  ??  0:00.10  /usr/sbin/httpd -D FOREGROUND
        70  6834  6203  0  0:00.01  ??  0:00.03  /usr/sbin/httpd -D FOREGROUND
        70  6852  6203  0  0:00.00  ??  0:00.00  /usr/sbin/httpd -D FOREGROUND
        70  6853  6203  0  0:00.01  ??  0:00.02  /usr/sbin/httpd -D FOREGROUND

  • Using hashing to group similar records

    - by Neil Dobson
    I work for a fulfillment company and we have to pack and ship many orders from our warehouse to customers. To improve efficiency we would like to group identical orders and pack them in the most optimal way. By identical I mean having the same number of order lines containing the same SKUs and the same order quantities. To achieve this I was thinking about hashing each order; we can then group by hash to quickly see which orders are the same. We are moving from an Access database to a PostgreSQL database, and we have .NET-based systems for data loading and general order processing, so we can either do the hashing during data loading or hand this task over to the DB. My first question is whether the hashing should be managed by the DB, possibly using triggers, or whether the hash should be created on the fly using a view or something. And secondly, would it be best to calculate a hash for each order line and then combine these to find an order-level hash for grouping, or should I just use a trigger for all CRUD operations on the order lines table which recalculates a single hash for the entire order and stores the value in the orders table? TIA
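
    On the "compute it on the fly" side, the hash can live in a view. A PostgreSQL sketch (string_agg needs 9.0+; the order_lines table and columns are assumptions, not from the question):

        -- Hash the sorted (sku, quantity) pairs of each order; identical orders
        -- produce identical hashes, so grouping on the hash groups the orders.
        CREATE VIEW order_hashes AS
        SELECT order_id,
               md5(string_agg(sku || 'x' || quantity::text, ',' ORDER BY sku)) AS order_hash
        FROM order_lines
        GROUP BY order_id;

        -- Example: find batches of identical orders to pack together
        SELECT order_hash, count(*)
        FROM order_hashes
        GROUP BY order_hash
        HAVING count(*) > 1;

    Sorting by SKU inside the aggregate is what makes line order irrelevant, which is exactly the property an order-level hash needs.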

  • Question about Reporting and Data Warehousing Software bundled with SQL Server 2005

    - by anonymous user
    We currently use SQL Server 2005 Enterprise for our fairly large application, which has its roots in pre-SQL Server 7.0. The tables are normalized and designed mainly for the application. The developers for the most part have the legacy SQL Server mindset, only using the part of T-SQL that existed back in 7.0 and none of the new features of T-SQL or of what is bundled with 2005. We're currently trying to build on-demand reports using some crappy third-party software, and will eventually try to build a data warehouse using more of the same crappy third-party software (name removed to protect the guilty; don't ask, I will not tell). The rationale for this was that we didn't want to spend more money to buy additional software from Microsoft (this was not my decision, I had no input, but it is my problem now). But from what I can tell, Enterprise includes all of these tools, or am I missing something? What comes bundled with SQL Server 2005 Enterprise as far as reporting and data warehousing? Will we need to purchase anything else? Is there actually anything else that can be purchased from Microsoft in this regard?

  • Rescuing a failed WCF call

    - by illdev
    Hello, I am happily using Castle's WcfFacility. From MonoRail I know the handy concept of rescues - consumer-friendly results that often, but not necessarily, contain data about what went wrong. I am creating a Silverlight application right now, doing quite a few WCF service calls. All these requests return an implementation of:

        public class ServiceResponse
        {
            private string _messageToUser = string.Empty;
            private ActionResult _result = ActionResult.Success;

            public ActionResult Result // Success, Failure, Timeout
            {
                get { return _result; }
                set { _result = value; }
            }

            public string MessageToUser
            {
                get { return _messageToUser; }
                set { _messageToUser = value; }
            }
        }

        public abstract class ServiceResponse<TResponseData> : ServiceResponse
        {
            public TResponseData Data { get; set; }
        }

    If the service has trouble responding the right way, I would want the thrown exception to be intercepted and converted to the expected implementation. Based on the thrown exception, I would want to pass on a nice message. Here is what one of the service methods looks like:

        [Transaction(TransactionMode.Requires)]
        public virtual SaveResponse InsertOrUpdate(WarehouseDto dto)
        {
            var w = dto.Id > 0 ? _dao.GetById(dto.Id) : new Warehouse();
            w.Name = dto.Name;
            _dao.SaveOrUpdate(w);
            return new SaveResponse { Data = new InsertData { Id = w.Id } };
        }

    I need the thrown exception for the transaction to be rolled back, so I cannot actually catch it and return something else. Any ideas where I could hook in?

  • Is it possible to filter data used by pivot table based on filtering the rows in a source table in Excel?

    - by Geoffrey Stoel
    I have developed a dashboard in Excel 2007 that uses one source table in a sheet (filled by a query on our data warehouse) and multiple pivot tables making different cross-sections of this data. I use GETPIVOTDATA in almost a hundred formulas to give me the right value for a specific indicator in my dashboard. This all works fine. However, I have now been asked to produce the dashboard for 5 different segments. As you can imagine, I don't want to create 5 different workbooks for this and then have to maintain the dashboard logic in all of them. So my question is the following: is it possible to automatically (through VBA or any other means) filter the results in my source table, which is the source for my pivot tables and thus for my dashboard values? So schematically:

        DATABASE_VIEW -- SOURCE_TABLE -- 12 pivot tables -- 100 GETPIVOTDATA functions

    Preferably I would like to load all the segments into the source table (one view on my database) and then filter the data in the source table, which results in filtered source data for my pivots. This way I can (without requerying the DB) quickly switch between segments in the dashboard (refreshing the pivots only). Data in the source table has the column CUSTOMER_SEGMENT available to filter on. Any help is appreciated. Geoffrey

  • Help on developing enterprise-level software solutions

    - by wefwgeweg
    There is a specific niche which I would like to target by providing a complete enterprise-level software solution... The problem is, where do I begin? Meaning, I come from writing just desktop software in VB/ASP.NET/PHP/MySQL, and suddenly unfamiliar terms pop up, like Oracle, SAP Business Information Warehouse, J2EE... Obviously something is pointing towards Java. Is it common for software suites or solutions to be developed 100% on Java technology and standards? Are there any other platforms to build enterprise-level software on? I am still lacking an understanding of what exactly "enterprise level" is. What is the sufficient condition for calling something "enterprise", when software that sells for $199 suddenly costs $19,999 for the "enterprise" package? I don't understand why there is such a huge discrepancy between "standard" and "enterprise" versions of software. Is it just an attempt to bag large corporations on a spending spree? So why does one choose to develop so-called "enterprise" software? Is it because of the large, inflated price tag you can justify with it? I would also like some entrepreneurial resources on starting your own enterprise software company in a niche... Thank you for reading; I am still trying to find the right questions.

  • Google Sites API - File Cabinets: Spaces and extension separator (.) are removed from file names

    - by user1299447
    We have a series of internal reports that we update regularly from our internal databases. We built an application in C# that uploads these reports to a Google Site. Everything works fine, except that the name of the file shown to the final user in the File Cabinet does not include the original spaces or the extension separator (.). For example, Stock per warehouse.pdf is shown as: Stockperwarehousepdf. Below is a simplified version of the code.

        private AtomEntry UploadAttachment(string filename, AtomEntry parent, string title, string description)
        {
            SiteEntry entry = new SiteEntry();
            AtomCategory category = new AtomCategory(SitesService.ATTACHMENT_TERM, SitesService.KIND_SCHEME);
            category.Label = "attachment";
            entry.Categories.Add(category);

            AtomLink parentLink = new AtomLink(AtomLink.ATOM_TYPE, SitesService.PARENT_REL);
            parentLink.HRef = parent.SelfUri;
            entry.Links.Add(parentLink);

            entry.MediaSource = new MediaFileSource(filename, MediaFileSource.GetContentTypeForFileName(filename));
            entry.Content.Type = MediaFileSource.GetContentTypeForFileName(filename);
            entry.Title.Text = title;
            entry.Summary.Text = description;

            AtomEntry newEntry = null;
            newEntry = service.Insert(new Uri(makeFeedUri("content")), entry);
            return newEntry;
        }

    The key line is where the MediaFileSource object is created. Any idea what we are missing? I've tried all sorts of changes :(

  • DB Strategy for inserting into a high read table (Sql Server)

    - by Tom
    Looking for strategies for a very large table with data maintained for reporting and historical purposes, where a very small subset of that data is used in daily operations. Background: we have Visitor and Visits tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back-end site allows management of the visitors (leads) from the front-end site. Most of the management occurs on a small subset of our visitors (visitors that become leads). The vast majority of the data in our Visitor and Visit tables is maintained only for a much smaller subset of user activity (basically reporting-type functionality). This is NOT an indexing problem; we have done all we can with indexing and with keeping our indexes clean, small, and not fragmented. PS: We do not currently have the budget or expertise for a data warehouse. The problem: we would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query runs against a huge data set of mostly irrelevant data. I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data (for instance, a Visitor_Archive and a Visitor_Small table), where the larger Visitor and Visit tables exist for inserts and history/reporting, and the smaller Visitor_Small table exists for managing leads, sending a lead an email, getting a lead's phone number, listing my leads, etc. The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and Visitor_Small tables in sync... Replication? Can I use replication to replicate only data with a certain column value (FooID = x)? Any other strategies?
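
    On the replication question: SQL Server transactional replication does support static row filters, so publishing only rows where FooID = x is possible. A lower-tech alternative, sketched below with invented column names, is a trigger that copies only qualifying rows into the small table:

        -- Hedged sketch: feed Visitor_Small with only the rows that matter
        CREATE TRIGGER trg_Visitor_SyncSmall ON dbo.Visitor
        AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO dbo.Visitor_Small (VisitorID, FooID, CreatedAt)
            SELECT i.VisitorID, i.FooID, i.CreatedAt
            FROM inserted AS i
            WHERE i.FooID = 1;   -- hypothetical "became a lead" filter value
        END;

    An AFTER UPDATE counterpart would be needed for visitors that only become leads later.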

  • Query Logging in Analysis Services

    - by MikeD
    On a project I work on, we capture the queries that get executed on our Analysis Services instance (SQL Server 2008 R2) and use the table to help us build aggregations; we also aggregate the query log daily into a data warehouse of operational data so we can track usage of our Analysis databases by users over time. We've learned a couple of helpful things about this logging that I'd like to share here.

    First off, the query log table automatically gets cleaned out by SSAS under a few conditions - schema changes to the analysis database, and even regular data and aggregation processing, can delete rows in the table. We like to keep these logs longer than that, so we have a trigger on the table that copies all rows into another table with the same structure. Here is our trigger code:

        CREATE TRIGGER [dbo].[SaveQueryLog] ON [dbo].[OlapQueryLog] AFTER INSERT AS
            INSERT INTO dbo.[OlapQueryLog_History]
                (MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration)
            SELECT MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration
            FROM inserted

    Second, the query logging process is "best effort" - if SSAS cannot connect to the database listed in the QueryLogConnectionString in the Analysis Server properties, it just stops logging. It doesn't generate any errors to the client at all, which is a good thing. Once it stops logging, it doesn't retry later - an hour, a day, a week, or even a month later - so long as the service doesn't restart.

    That has burned us a couple of times, when we have made changes to the service account used for SSAS and that account didn't have access to the database we want to log to. The last time this happened, we noticed a while later that no logging was taking place, and I determined that the service account didn't have sufficient permissions, so I made the necessary changes to give it access to the logging database. I first tried just the db_datawriter role and that wasn't enough, so I granted the service account membership in the db_owner role. Yes, that's a much bigger set of permissions, but I didn't want to search out the specific permissions at the time.

    Once I determined that the service account had the appropriate permissions, I wanted to get query logging restarted from SSAS, and I wondered how to do that. Having just used a larger hammer than necessary with the db_owner role membership, I considered simply restarting SSAS to get it logging again. However, this was a production server in the middle of business hours, and there were active users connected to that SSAS instance, so I thought better of it.

    As I considered the options, I remembered that the first time I set up query logging, by putting a valid connection string into the QueryLogConnectionString server property, logging started immediately after I saved the properties. I wondered if I could make some other change to the connection string so that query logging would start again without restarting the service. I went into the connection string dialog, went to the All page, and looked at the properties I could change that wouldn't affect the actual connection. Aha! The Application Name property would do just nicely - I set it to "SSAS Query Logging" (it was previously blank) and saved the changes to the server properties. And the query logging started up right away.

    If I need to get this running again in the future, I can just make a small change to the Application Name property again, save it, and even change it back again if I want to. The other nice side effect of setting the Application Name property is that now I can see (and possibly filter for or filter out) the SQL activity in that database that is related to the query logging process in Profiler.

    To sum up:
        - The SSAS query logging process will automatically delete rows from the QueryLog table, so if you want to keep them longer, put a trigger on the table to copy the rows to another table.
        - The SSAS service account requires more than db_datawriter role membership (and probably less than db_owner) in the database specified in the QueryLogConnectionString server property to successfully insert log rows into the QueryLog table.
        - Query logging will stop quietly whenever it encounters an error. Make a change to the QueryLogConnectionString server property (such as the Application Name attribute) to get query logging to restart, and you won't have to restart the service.
