Search Results

Search found 22384 results on 896 pages for 'customer support'.


  • LINQ to SQL - "Attach an entity that is not new, perhaps having been loaded from another DataContext"

    - by soldieraman
    Alright, here is how I got this error. I have one application sitting on a server, and two users accessing it to do some bulk data processing - e.g. entering values, having the application work with another system to extract values for them, and then saving. I can't recreate the error. The error logs show:

    - The error happened at the same time in both sessions.
    - Both happened on an Attach/Submit (but in two different functions).
    - There is no way they were using the same DataContext object, as I store the DataContext in HttpContext.Items.

    My hunch / guess is one of the following:

    1. One DataContext was not refreshed, i.e. an object was created for the same item twice because it was new in both forms. E.g. for a customer number: one DataContext created a customer (as one couldn't be found); the other couldn't find it either (I am using compiled queries to look it up in the DataContext), so it created another object, and the attach failed.
    2. HttpContext.Items lost its value somehow (I am using a virtual PC as the server - maybe something went wrong there).

    I lean more towards the second, as I can't recreate the error - but it might just be a timing thing (for attach/save), and the error message also makes me think of the second.
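
    For reference, a minimal sketch of the one-DataContext-per-request pattern described above (the type and key names are illustrative, not from the original code):

        // Hedged sketch: one DataContext per HTTP request via HttpContext.Items.
        // MyDataContext stands in for the application's actual DataContext type.
        public static MyDataContext GetRequestContext()
        {
            const string key = "RequestDataContext";
            var context = HttpContext.Current.Items[key] as MyDataContext;
            if (context == null)
            {
                context = new MyDataContext();
                HttpContext.Current.Items[key] = context;
            }
            return context;
        }

    Note that nothing in this pattern serializes a find-or-create across two concurrent requests: both can miss the lookup and each create its own new object, which is consistent with the first hunch.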

    Read the article

  • Architecture for database analytics

    - by David Cournapeau
    Hi,

    We have an architecture where we provide each customer Business Intelligence-like services for their website (internet merchants). Now I need to analyze those data internally (for algorithmic improvement, performance tracking, etc.), and they are potentially quite heavy: we have up to millions of rows per customer per day, and I may want to know how many queries we had in the last month, compared week by week, etc. - that is on the order of billions of entries, if not more.

    The way it is currently done is quite standard: daily scripts scan the databases and generate big CSV files. I don't like this solution for several reasons:

    - as is typical with those kinds of scripts, they fall into the write-once-and-never-touched-again category
    - tracking things in "real time" is necessary (we have a separate toolset to query the last few hours at the moment)
    - it is slow and non-"agile"

    Although I have some experience dealing with huge datasets for scientific usage, I am a complete beginner as far as traditional RDBMSs go. It seems that using a column-oriented database for analytics could be a solution (the analytics don't need most of the data we have in the app database), but I would like to know what other options are available for this kind of problem.

    Read the article

  • How to develop an Online Shopping Portal application using PHP?

    - by Sarang
    I do not know PHP, and I have to develop a shopping portal with the following definition:

    Scenario: Online Shopping Portal. XYZ.com wants to create an online shopping portal for managing its registered customers and their shopping. Customers need to register themselves before they can shop using the portal. However, everyone, whether registered or not, can view the various products along with the prices listed in the portal. Registered customers, after logging in, are allowed to place an order for one or more of the products listed in the portal. Once the order is placed, the customer gets a reference order number, and the order status should be "order in process". Customers can track their order using the given reference number. The management of XYZ.com should be able to change the order status of a particular reference order number to "shipped" once the products are shipped to the shipping address entered by the customer at the time of placing the order.

    The functionalities required are:

    - Create the interface for the XYZ.com shopping portal using HTML/XHTML and CSS.
    - Implement the client-side validations using JavaScript.
    - Create the tables using MySQL.
    - Implement the functionality using the server-side scripting language, PHP.
    - Integrate all the above tasks and make the XYZ.com shopping portal functional.

    How do I develop this application following the proper steps of development?
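
    As a concrete starting point, the order flow described above maps onto MySQL tables along these lines (a hedged sketch; every name here is illustrative rather than taken from the original spec):

        -- Registered customers and their orders (illustrative sketch)
        CREATE TABLE customers (
            id INT AUTO_INCREMENT PRIMARY KEY,
            email VARCHAR(255) NOT NULL UNIQUE,
            password_hash VARCHAR(255) NOT NULL,
            shipping_address TEXT
        );

        CREATE TABLE orders (
            reference_no INT AUTO_INCREMENT PRIMARY KEY,  -- the customer's tracking number
            customer_id INT NOT NULL,
            status ENUM('order in process', 'shipped') NOT NULL
                DEFAULT 'order in process',
            placed_at DATETIME NOT NULL,
            FOREIGN KEY (customer_id) REFERENCES customers(id)
        );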

    Read the article

  • AppFabric caching's local cache isn't working for us... What are we doing wrong?

    - by Olly
    We are using AppFabric as the second-level cache for an NHibernate ASP.NET application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when admin updates something, the customer-facing site is updated. It seems to be working OK - we have a cache cluster on a separate server and all is well - but we want to enable the local cache to get better performance. However, it doesn't seem to be working. We have enabled it like this:

        bool UseLocalCache = true; // value elided in the original; true assumed, since local cache was enabled
        int LocalCacheObjectCount = int.MaxValue;
        TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3);
        DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy =
            DataCacheLocalCacheInvalidationPolicy.TimeoutBased;

        if (UseLocalCache)
        {
            configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(
                LocalCacheObjectCount,
                LocalCacheDefaultTimeout,
                LocalCacheInvalidationPolicy
            );
            // configuration.NotificationProperties =
            //     new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300));
        }

    Initially we tried using a timeout invalidation policy (3 minutes) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but always misses). cache.GetType().Name returns "LocalCache" - so the factory has made a local cache.

    Running "Get-Cache-Statistics MyCache" in PowerShell on my dev environment (ASP.NET app running locally from VS2008, cache cluster running on a separate W2K8 machine) shows a handful of request counts. However, on the production environment, the request count increases dramatically. We tried following the method here to see the cache client-server traffic: http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx - but the log file had nothing but the initial header in it, i.e. no logging either. I can't find anything on SO or Google.

    Have we done something wrong? Have we got a screwy install of AppFabric? (We installed it via Web Platform Installer, I think. Note: the IIS box running ASP.NET isn't in the cluster - it is just the client.) Any insights gratefully received!

    Read the article

  • Creating a MySQL view to SELECT columns from different tables

    - by user330429
    I need help constructing a VIEW over 4 tables. The view should contain the columns: ER.ID, ER.EMPID, ER.CUSTID, ER.STATUS, ER.DATEREPORTED, ER.REPORT, EB.NAME, CR.CUSTNAME, CR.LOCID, CL.LOCNAME, DI.DEPTNAME.

    Aliases: EMP_REPORT ER, EMP_BIO EB, CUST_RECORD CR, CUST_LOC CL, DEPT_ID DI.

    The data models are:

        describe EMP_REPORT;
        +--------------+-------------+------+-----+---------+----------------+
        | Field        | Type        | Null | Key | Default | Extra          |
        +--------------+-------------+------+-----+---------+----------------+
        | id           | int(11)     | NO   | PRI | NULL    | auto_increment |
        | empid        | int(11)     | NO   |     | NULL    |                |
        | custid       | int(11)     | NO   |     | NULL    |                |
        | status       | varchar(32) | NO   |     | NULL    |                |
        | datereported | bigint(20)  | NO   |     | NULL    |                |
        | report       | text        | YES  |     | NULL    |                |
        +--------------+-------------+------+-----+---------+----------------+

        describe EMP_BIO;
        +--------+-------------+------+-----+---------+-------+
        | Field  | Type        | Null | Key | Default | Extra |
        +--------+-------------+------+-----+---------+-------+
        | empid  | int(11)     | NO   | PRI | NULL    |       |
        | name   | varchar(56) | NO   |     | NULL    |       |
        | sex    | char(1)     | NO   |     | NULL    |       |
        | deptid | int(11)     | NO   |     | NULL    |       |
        | email  | varchar(32) | NO   |     | NULL    |       |
        | mobile | bigint(20)  | YES  |     | NULL    |       |
        | gtlk   | varchar(32) | YES  |     | NULL    |       |
        | skype  | varchar(32) | YES  |     | NULL    |       |
        | cvid   | int(11)     | YES  |     | NULL    |       |
        +--------+-------------+------+-----+---------+-------+

        describe CUST_RECORD;
        +----------+--------------+------+-----+---------+----------------+
        | Field    | Type         | Null | Key | Default | Extra          |
        +----------+--------------+------+-----+---------+----------------+
        | custid   | int(11)      | NO   | PRI | NULL    | auto_increment |
        | custname | varchar(32)  | NO   |     | NULL    |                |
        | address  | varchar(255) | YES  |     | NULL    |                |
        | contactp | varchar(32)  | YES  |     | NULL    |                |
        | mobile   | bigint(20)   | YES  |     | NULL    |                |
        | locid    | int(11)      | NO   |     | NULL    |                |
        | remarks  | text         | YES  |     | NULL    |                |
        | date     | int(11)      | YES  |     | NULL    |                |
        | addedby  | int(11)      | YES  |     | NULL    |                |
        +----------+--------------+------+-----+---------+----------------+

        describe CUST_LOC;
        +---------+-------------+------+-----+---------+-------+
        | Field   | Type        | Null | Key | Default | Extra |
        +---------+-------------+------+-----+---------+-------+
        | locid   | int(11)     | NO   | PRI | 0       |       |
        | locname | varchar(32) | NO   |     | NULL    |       |
        +---------+-------------+------+-----+---------+-------+

        describe DEPT_ID;
        +----------+-------------+------+-----+---------+-------+
        | Field    | Type        | Null | Key | Default | Extra |
        +----------+-------------+------+-----+---------+-------+
        | deptid   | int(11)     | NO   |     | NULL    |       |
        | deptname | varchar(32) | YES  |     | NULL    |       |
        +----------+-------------+------+-----+---------+-------+

    The table EMP_REPORT contains reports submitted by employees; all of its columns need to be fetched. The empid in this table should be used to fetch the corresponding name from the EMP_BIO (employee biodata) table. The custid in EMP_REPORT should be used to fetch the corresponding locid from CUST_RECORD (customer record), which in turn is used to fetch locname from the CUST_LOC (customer location) table. The empid in EMP_REPORT is also used to fetch the corresponding deptid from EMP_BIO, which is then used to fetch the corresponding deptname from the DEPT_ID (department) table.

    I tried constructing the view using a union of different SELECT queries, but didn't get proper results. Please help me. Thanks in advance.

    PS: Sorry for my poor English.
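
    For reference, a join along the relationships described above would look roughly like this (an untested sketch; the view name is made up):

        CREATE VIEW emp_report_view AS
        SELECT ER.id, ER.empid, ER.custid, ER.status, ER.datereported, ER.report,
               EB.name, CR.custname, CR.locid, CL.locname, DI.deptname
        FROM EMP_REPORT ER
        JOIN EMP_BIO EB     ON EB.empid  = ER.empid    -- employee name and deptid
        JOIN CUST_RECORD CR ON CR.custid = ER.custid   -- customer name and locid
        JOIN CUST_LOC CL    ON CL.locid  = CR.locid    -- location name
        JOIN DEPT_ID DI     ON DI.deptid = EB.deptid;  -- department name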

    Read the article

  • Java "Pool" of longs or Oracle sequence with reusable values

    - by Anthony Accioly
    Several months ago I implemented a solution for choosing unique values from a range between 1 and 65535 (16 bits). This range is used to generate unique Route Target suffixes, which for this customer's massive network (it's a huge ISP) are a heavily disputed resource, so any freed index needs to become immediately available to the end user.

    To tackle this requirement I used a BitSet: allocate an RT index with set, and deallocate a suffix with clear. The method nextClearBit() can find the next available index. I handle synchronization / concurrency issues manually. This works pretty well for a small range: the entire index is small (around 10k), it is blazing fast, and it can easily be serialized into a Blob field.

    The problem is that some new devices can handle RTs of 32 bits (range 1 to 4294967296), which can't be managed with a BitSet (it would, by itself, consume around 600 MB, and is limited to the int range anyway). Even with this massive range available, the client still wants freed Route Targets to become available again to the end user, mainly because the lowest ones (up to 65535) - which are compatible with old routers - are the most heavily disputed.

    Before I tell the customer that this is impossible and that he will have to make do with my reusable index for the lower RTs (up to 65535) and use a database sequence for the other ones (which means that when a user frees such a Route Target, it will not become available again): would anyone shed some light? Maybe some kind soul has already implemented a high-performance number pool for Java (6, if it matters), or I am missing a killer feature of Oracle Database (11gR2, if it matters)... Wishful thinking. Thank you very much in advance.
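
    For reference, a minimal sketch of the BitSet-based pool described above (the class and method names are illustrative; synchronization is handled with a plain monitor, as in the original description):

        import java.util.BitSet;

        public class RtIndexPool {
            private final BitSet used = new BitSet(65536);

            // Finds and reserves the lowest free suffix in 1..65535.
            public synchronized int allocate() {
                int idx = used.nextClearBit(1);
                if (idx > 65535) {
                    throw new IllegalStateException("pool exhausted");
                }
                used.set(idx);
                return idx;
            }

            // Makes a suffix immediately available again.
            public synchronized void free(int idx) {
                used.clear(idx);
            }
        }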

    Read the article

  • Working with fields which can mutate or be new instances altogether

    - by dotnetdev
    Structs are usually used for immutable data, e.g. a phone number, which does not mutate; instead you get a new one (e.g. the number 000 becoming 0001 would mean two separate numbers). However, a piece of information like Name, a string, can either mutate (company abc changing its name to abcdef) or be replaced by a new instance altogether (the company being given a new name like def). I assume fields like this should reside in the mutable class and not in an immutable structure?

    My way of structuring code is to put an immutable concept, like Address (any change is a completely new address), in a struct and then reference it from a class like Customer, since a Customer always has an Address. So I would put CompanyName, or Employer, in the class, as it is mutable. But a name can either mutate (and so remain the same single instance) or be set up as a new name while the company still owns the old one too.

    Would the correct pattern for assigning a new instance (e.g. a new company name, with the old name still owned by the company) be something like this?

        string name = "";
        string newName = "new"; // a separate instance
        name = newName;

    And is a mutation just the standard assignment pattern? Thanks
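
    For illustration, the "immutable concept" idea reads like this as a C# struct (a hypothetical type, not from the original post):

        public struct PhoneNumber
        {
            private readonly string digits;

            public PhoneNumber(string digits) { this.digits = digits; }

            public string Digits { get { return digits; } }

            // "Changing" the number yields a new value; the old instance is untouched.
            public PhoneNumber WithDigits(string newDigits)
            {
                return new PhoneNumber(newDigits);
            }
        }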

    Read the article

  • Using Maven for maintaining product documentation

    - by Waldheinz
    We are using Maven for building a Java server-style application. It consists of several modules (in a single Maven "reactor") which can be plugged together to generate a final product (essentially a .jar) with the features enabled that the customer needs. All the internals are documented using JavaDoc and the like, but that's not what you can give to the customer to find out how to get the thing running. Currently we have an OpenOffice document which serves as the end-user documentation. I'd like to integrate this documentation into the Maven build process, where each module's documentation is maintained (hand-edited) together with the module's sources, and the final document can reference the required module documentation sections, add some friendly foreword and, if at all possible, reference into the JavaDocs. Ultimately, the document should be output as a PDF. Is there any experience with Maven plugins that can help with this? Is DocBook the right tool? Maybe LaTeX? Or something completely different? A sound "stick with OpenOffice and some text blocks" could be an answer, too.
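
    If DocBook turns out to be the answer, the docbkx-maven-plugin is one commonly cited option. A hedged sketch of the configuration follows (the coordinates, goal and src/docbkx default are quoted from memory and should be verified against the plugin's documentation):

        <plugin>
            <groupId>com.agilejava.docbkx</groupId>
            <artifactId>docbkx-maven-plugin</artifactId>
            <executions>
                <execution>
                    <!-- renders the DocBook sources (src/docbkx by default) to PDF -->
                    <goals>
                        <goal>generate-pdf</goal>
                    </goals>
                    <phase>package</phase>
                </execution>
            </executions>
        </plugin>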

    Read the article

  • Iteration over a LINQ to SQL query is very slow

    - by devzero
    I have a view, AdvertView, in my database; this view is a simple join between some tables (advert, customer, properties). Then I have a simple LINQ query to fetch all adverts for a customer:

        public IEnumerable<AdvertView> GetAdvertForCustomerID(int customerID)
        {
            var advertList = from advert in _dbContext.AdvertViews
                             where advert.Customer_ID.Equals(customerID)
                             select advert;
            return advertList;
        }

    I then wish to map this to model items for my MVC application:

        public List<AdvertModelItem> GetAdvertsByCustomer(int customerId)
        {
            List<AdvertModelItem> lstAdverts = new List<AdvertModelItem>();
            List<AdvertView> adViews = _dataHandler.GetAdvertForCustomerID(customerId).ToList();
            foreach (AdvertView adView in adViews)
            {
                lstAdverts.Add(_advertMapper.MapToModelClass(adView));
            }
            return lstAdverts;
        }

    I was expecting to have some performance issues with the SQL, but the problem seems to be with the ToList() call. I'm using ANTS Performance Profiler and it reports that the total runtime of the function is 1,400 ms, and 850 of those are in ToList(). So my question is: why does the ToList() call take such a long time here?
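
    One thing worth keeping in mind when reading those numbers: LINQ to SQL queries are deferred, so the database round trip happens when the query is enumerated, not where it is written. A quick illustration (same query, hypothetical context):

        var query = _dbContext.AdvertViews
                              .Where(a => a.Customer_ID == customerId); // no SQL sent yet
        var list = query.ToList(); // the SQL executes here, inside ToList()

    So the profiler attributes the query execution itself, not just the list construction, to the ToList() call.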

    Read the article

  • Create Session Variable from different datasources?

    - by Szafranamn
    Currently I am developing a dynamic website using Dreamweaver CS5 with ColdFusion 9, using Access to create my databases, along with QuickBooks and QODBC to create another database. I have established a login session variable stemming from the login page. This session variable is drawn from one datasource, "Access" (table "Logininfo", field "FullName"), but I want to create another session variable, either at this point or further into the member's page, to use in a query sequence. This second session variable would stem from another datasource, "QBs" (table "Invoice", field "CustomerRefFullName"), which is generated through QuickBooks and QODBC. I am not sure if this is possible, but if it is, how do I do it?

    I want to do this so I can query the Invoice table and show on the customer's page the invoices unique to them, so it would have to be related to their login credentials. If there is a better route to take, I would greatly appreciate the advice.

    My current plan, hence the need for the second session variable: when I create another web page for the customer to see their invoices, I need a recordset that accesses the Invoice table. To do so, I think the best way would be to somehow convert session.FullName (which came from the Access datasource, Logininfo table) into a session.CustomerRefFullName (which would have to come from datasource "QBs", table "Invoice", field "CustomerRefFullName"). That way I could write the query's WHERE clause against CustomerRefFullName and have each logged-in user see their specific invoices. So is there a way to turn a session variable off one datasource/table into a different session variable off a new datasource/table, even if it is unique just to that page?

    Below is the login code; if additional information is needed, let me know.

        <cfif IsDefined("FORM.username")>
        SELECT FullName, Username, Password, AccessLevels
        FROM Logininfo
        WHERE Username= AND Password=
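
    A hedged CFML sketch of the idea (it assumes the Access FullName matches the QuickBooks CustomerRefFullName, which is exactly the mapping the question leaves open):

        <cfquery name="qCustomer" datasource="QBs">
            SELECT CustomerRefFullName
            FROM Invoice
            WHERE CustomerRefFullName = <cfqueryparam value="#SESSION.FullName#"
                                                      cfsqltype="cf_sql_varchar">
        </cfquery>
        <cfset SESSION.CustomerRefFullName = qCustomer.CustomerRefFullName>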

    Read the article

  • C# type conversion between two similar DataTable objects

    - by Ali
    I have a .NET project with Sync Framework and two separate DataSets, for MS SQL and Compact SQL. In my base class I have a generic DataTable object. In my derived classes I assign a typed DataTable to the generic object based on whether the application is operating online or offline. Example:

        if (online)
            _dataTable = new MSSQLDataSet.Customer();
        else
            _dataTable = new CompactSQLDataSet.Customer();

    Now everywhere in my code I have to check and cast based on the current network mode, like this:

        public void changeCustomerID(int ID)
        {
            if (online)
                ((MSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value;
            else
                ((CompactMSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value;
        }

    But I don't think this is very efficient, and I believe it could be done in a smarter way, using only one line of code, by dynamically getting the type of _dataTable at runtime. My problem is at design time: in order to access DataTable properties such as "CustomerID", it has to be cast to either MSSQLDataSet.CustomerDataTable or CompactMSSQLDataSet.CustomerDataTable. Is there a way to have a function or an operator convert _dataTable to its runtime type but still be able to use its design-time properties, which are the same between the two types? Something like:

        ((aType)_dataTable)[i].CustomerID = value;
        // or
        GetRuntimeType(_dataTable)[i].CustomerID = value;
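
    For what it's worth, one way to sidestep the cast entirely is the untyped DataTable API, which both generated table types inherit (at the cost of design-time type checking):

        // Both typed tables derive from DataTable, so column access by name
        // works against either one without a cast.
        _dataTable.Rows[i]["CustomerID"] = value;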

    Read the article

  • Using Mercurial in a Large Organization

    - by Kristopher Johnson
    I've been using Mercurial for my own personal projects for a while, and I love it. My employer is considering a switch from CVS to SVN, but I'm wondering whether I should push for Mercurial (or some other DVCS) instead. One wrinkle with Mercurial is that it seems to be designed around the idea of having a single repository per "project". In this organization, there are dozens of different executables, DLLs, and other components in the current CVS repository, hierarchically organized. There are a lot of generic reusable components, but also some customer-specific components, and customer-specific configurations. The current build procedures generally get some set of subtrees out of the CVS repository. If we move from CVS to Mercurial, what is the best way to organize the repository/repositories? Should we have one huge Mercurial repository containing everything? If not, how fine-grained should the smaller repositories be? I think people will find it very annoying if they have to pull and push updates from a lot of different places, but they will also find it annoying if they have to pull/push the entire company codebase. Anybody have experience with this, or advice?

    Read the article

  • Looking for an email template engine for end-users ...

    - by RizwanK
    We have a number of customers that we have to send monthly invoices to. Right now, I'm maintaining a codebase that runs SQL queries against our customer database and billing database, places that data into emails, and sends them. I grow weary of maintaining this every time we want to include a new promotion or change our customer service phone numbers. So, I'm looking for a replacement that moves more of this into the hands of those requesting the changes. In my ideal world, I need:

    - A WYSIWYG (man, does anyone even say that anymore?) email editor that generates templates based upon the output of a database query
    - The ability to drag and drop various fields from the database query into the email template
    - Display of sample email results with the database query
    - A web application, preferably not requiring IIS
    - As little code as possible for the end user, but basic functionality allowed (i.e. arrays/for loops)
    - Either its own email delivery engine, or output written in a way that lets me easily write a Python script to deliver the email
    - Support for generic database connectors (I need MSSQL and MySQL)
    - F/OSS

    So... can anyone suggest a project like this, or some tools that'd be useful for rolling my own? (My current alternative idea is using something like ERB or Tenjin, having them write the code, but not having a live preview for the editor would suck...)

    Read the article

  • ActiveModel::MassAssignmentSecurity::Error in CustomersController#create (attr_accessible is set)

    - by megabga
    In my controller, I get an error on the create action when it tries to create the model (can't mass-assign), yet in my spec, my test of mass assignment on the model passes!?!

    My model:

        class Customer < ActiveRecord::Base
          attr_accessible :doc, :doc_rg, :name, :birthday, :name_sec, :address,
                          :state_id, :city_id, :district_id, :customer_pj, :is_customer,
                          :segment_id, :activity_id, :person_type, :person_id

          belongs_to :person, :polymorphic => true, dependent: :destroy
          has_many :histories
          has_many :emails

          def self.search(search)
            if search
              conditions = []
              conditions << ['name LIKE ?', "%#{search}%"]
              find(:all, :conditions => conditions)
            else
              find(:all)
            end
          end
        end

    I've tried setting attr_accessible in the controller too, in my randomized way. The controller:

        class CustomersController < ApplicationController
          include ActiveModel::MassAssignmentSecurity
          attr_accessible :doc, :doc_rg, :name, :birthday, :name_sec, :address,
                          :state_id, :city_id, :district_id, :customer_pj, :is_customer

          autocomplete :business_segment, :name, :full => true
          autocomplete :business_activity, :name, :full => true
          [...]
        end

    The test (which passes):

        describe "accessible attributes" do
          it "should allow access to basics fields" do
            expect do
              @customer.save
            end.should_not raise_error(ActiveModel::MassAssignmentSecurity::Error)
          end
        end

    The error:

        ActiveModel::MassAssignmentSecurity::Error in CustomersController#create
        Can't mass-assign protected attributes: doc, doc_rg, name_sec, address,
        state_id, city_id, district_id, customer_pj, is_customer

    https://github.com/megabga/crm - Ruby 1.9.2p320, Rails 3.2, Mac OS, pg

    Read the article

  • Application-specific seed data population

    - by user339108
    Env: JBoss, (H2, MySQL, Postgres), JPA, Hibernate 3.3.x

        @Id
        @GeneratedValue(strategy = IDENTITY)
        private Integer key;

    Currently our primary keys are created using the above annotation. We expect to support a large number of users (~a million), so what key type should be used? Should it be Integer or Long, or should I use the unsigned versions of the above declarations?

    We have a J2EE application which needs to be populated with some seed data on installation. On purchase, the customer creates his own data on top of the application. We just want to make sure that there is enough room to ship, modify or add data in future releases. What would be the best mechanism to support this? We had looked at starting all table identifiers from a certain id (say 1000), but this mandates modifying primary key generation to use table- or sequence-based generators, and we have around ~100 tables. We are not sure if this is the right strategy.

    If we use a signed integer approach for the key, would it make sense to have the seed data start at 0 and go down (i.e. negative numbers), so that all customer-specific data sits at 0 and above (i.e. positive numbers)?
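
    As a hedged sketch of the "start identifiers at a certain id" idea mentioned above (the generator and sequence names are illustrative):

        // Leaves ids below 1000 free for shipped seed data.
        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "customer_seq")
        @SequenceGenerator(name = "customer_seq",
                           sequenceName = "CUSTOMER_SEQ",
                           initialValue = 1000,
                           allocationSize = 50)
        private Long key;

    The trade-off is the one the question already names: this has to be repeated (or inherited from a mapped superclass) across all ~100 tables.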

    Read the article

  • Avoid slowdowns while using off-site database

    - by Anders Holmström
    The basic layout of my problem is this:

    - Website (ASP.NET/C#) hosted at a dedicated hosting company (location 1)
    - Company database (SQL Server) with records of relevant data (location 2)
    - Locations 1 & 2 connected through VPN
    - Customers visiting the website and wanting to pull data from the company database
    - No possibility of changing the server locations or layout (i.e. moving the website to an in-office server isn't possible)

    What I want to do is figure out the best way to handle the data access in this case, minimizing the need for time-expensive database calls over the VPN. The first idea I have is this: when a user enters the section of the website needing the DB data, you pull all the needed tables from the database into an in-memory dataset. All subsequent views/updates to the data are done on this dataset. When the user leaves (logout, session timeout, browser closed, etc.) the dataset gets sent to the SQL server.

    I'm not sure if this is a realistic solution, and it obviously has some problems. If two web visitors perform updates on the same data, the one finishing last will have their changes overwrite the first one's. There's also no way of knowing you have the latest data (i.e. if a customer pulls some info on their projects and we update this info while they are viewing it, they won't see these changes - PLUS the above overwriting issue will arise).

    The other solution would be to somehow aggregate database calls and make sure they only happen when you need them, e.g. during data updates but not during data views. But then again, the longer the pause between these refreshing DB calls, the bigger the chance that the data view is out of date, as per the problem described above.

    Any input on the above or some fresh ideas would be most welcome.

    Read the article

  • Is there any need for me to use wstring in the following case?

    - by Yan Cheng CHEOK
    Currently, I am developing an app for a customer in China. Customers in China mostly have their OS encoding set to GB2312. I need to write a text file which will be encoded using GB2312.

    - I use std::ofstream to write the file.
    - I compile my application under MBCS mode, not Unicode.
    - I use the following code to convert CString to std::string, and write it to the file using ofstream:

        std::string Utils::ToString(CString& cString)
        {
            /* Will not work correctly if we are compiled under Unicode mode. */
            return (LPCTSTR)cString;
        }

    To my surprise, it just works. I thought I would need to at least make use of wstring. I tried to do some investigation. Here is the MBCS.txt generated. I tried to print a single character named ? (its value is 0xBDC5).

    - When I use CString to carry this character, its length is 2.
    - When I use Utils::ToString to convert it to std::string, the returned string's length is also 2.
    - I write it to the file using std::ofstream.

    My question is: when I examine MBCS.txt using a hex editor, the value is displayed as BD (LSB) and C5 (MSB). But I am using a little-endian machine. Shouldn't the hex editor show me C5 (LSB) and BD (MSB)? I checked Wikipedia; GB2312 doesn't seem to specify an endianness.

    It seems that using std::string + CString works fine for my case. May I know in what cases the above methodology will not work, and when I should start to use wstring?

    Read the article

  • How can I introduce a regex action to match the first element in a Catalyst URI?

    - by RET
    Background: I'm using a CRUD framework in Catalyst that auto-generates forms and lists for all tables in a given database. For example: /admin/list/person, /admin/add/person or /admin/edit/person/3 all dynamically generate pages or forms as appropriate for the table 'person'. (In other words, Admin.pm has actions edit, list, add, delete and so on that expect a table argument and possibly a row-identifying argument.)

    Question: In the particular application I'm building, the database will be used by multiple customers, so I want to introduce a URI scheme where the first element is the customer's identifier, followed by the administrative action/table etc.:

        /cust1/admin/list/person
        /cust2/admin/add/person
        /cust2/admin/edit/person/3

    This is for "branding" purposes, and also to ensure that bookmarks or URLs passed from one user to another do the expected thing. But I'm having a lot of trouble getting this to work. I would prefer not to modify the subs in the existing framework, so I've been trying variations on the following:

        sub customer : Regex('^(\w+)/(admin)$') {
            my ($self, $c, @args) = @_;

            # validation of captured arg snipped...

            my $path = join('/', 'admin', @args);
            $c->request->path($path);
            $c->dispatcher->prepare_action($c);
            $c->forward($c->action, $c->req->args);
        }

    But it just will not behave. I've used regex-matching actions many times, but putting one in the very first 'barrel' of a URI seems unusually traumatic. Any suggestions gratefully received.

    Read the article

  • Best solution for __autoload

    - by tpk
    As our PHP5 OO application grew (in both size and traffic), we decided to revisit the __autoload() strategy. We always name the file after the class definition it contains, so class Customer would be contained within Customer.php. We used to list the directories in which a file could potentially exist, trying each in turn until the right .php file was found. This is quite inefficient, because you're potentially going through a number of directories you don't need to, and doing so on every request (thus making loads of stat() calls).

    Solutions that come to mind:

    - Use a naming convention that dictates the directory name (similar to PEAR). Disadvantage: doesn't scale too well, resulting in horrible class names.
    - Come up with some kind of pre-built array of the locations (Propel does this for its __autoload). Disadvantage: requires a rebuild before any deploy of new code.
    - Build the array "on the fly" and cache it. This seems to be the best solution, as it allows any class names and directory structure you want, and is fully flexible in that new files just get added to the list. The concerns are: where to store it, and what about deleted/moved files? For storage we chose APC, as it doesn't have the disk I/O overhead. As for file deletes, it doesn't matter, as you probably don't want to require them anywhere anyway. As for moves... that's unresolved (we ignore it, as historically it hasn't happened very often for us). A sketch of this option follows below.

    Any other solutions?
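
    A minimal sketch of that third option (apc_fetch() and apc_store() are the real APC calls; build_class_map() is a hypothetical helper that scans the source directories once and returns a class-to-file array):

        function __autoload($class)
        {
            $map = apc_fetch('class_map');
            if ($map === false) {
                $map = build_class_map(dirname(__FILE__)); // hypothetical one-off scan
                apc_store('class_map', $map);
            }
            if (isset($map[$class])) {
                require $map[$class]; // no per-directory stat() calls on the hot path
            }
        }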

    Read the article

  • Approach for parsing file and creating dynamic data structure for use by another program

    - by user275633
    All,

    Background: I have a customer with some Python-based build scripts for their datacenter that I've inherited. I did not work on the original design, so I'm limited to some degree in what I can and can't change. That said, my customer has a properties file that they use in their datacenter. Some of the values are used to build their servers, and unfortunately other applications also use these values, so I cannot change them to make things easier for me. What I want to do is make the scripts more dynamic, so that I don't have to keep updating them in the future and can just add more hosts to the properties file. The properties file looks something like this:

        projectName.ClusterNameServer1.sslport=443
        projectName.ClusterNameServer1.port=80
        projectName.ClusterNameServer1.host=myHostA
        projectName.ClusterNameServer2.sslport=443
        projectName.ClusterNameServer2.port=80
        projectName.ClusterNameServer2.host=myHostB

    In their deployment scripts they basically have a lot of "if projectName.ClusterNameServerX" checks (where X is some number of entries defined) and then do something, e.g.:

        if projectName.ClusterNameServer1.host != "" do X
        if projectName.ClusterNameServer2.host != "" do X
        if projectName.ClusterNameServer3.host != "" do X

    Then when they add another host (say Server4), they add another if statement.

    Question: What I would like to do is make the scripts more dynamic: parse the properties file, put what I need into some data structure to pass to the deployment scripts, and then just iterate over the structure and do my deployment that way, so I don't have to constantly add a bunch of "if some host#, do something" checks. I'm curious to hear suggestions as to how others would parse the file, what sort of data structure they would use, and how they would group things together - by ClusterNameServer# or something else. Thanks
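
    A hedged Python sketch of one way to do that grouping (the property names come from the example above; the function and file names are illustrative):

        import re
        from collections import defaultdict

        def parse_servers(path):
            """Group properties by their ClusterNameServerN segment."""
            servers = defaultdict(dict)
            pattern = re.compile(r'^projectName\.(ClusterNameServer\d+)\.(\w+)=(.*)$')
            with open(path) as f:
                for line in f:
                    m = pattern.match(line.strip())
                    if m:
                        server, key, value = m.groups()
                        servers[server][key] = value
            return servers

        # The deployment scripts can then iterate instead of hard-coding ifs:
        for server, props in sorted(parse_servers('datacenter.properties').items()):
            if props.get('host'):
                pass  # "do X" for each defined host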

    Read the article

  • XML parameter for SOAP web service request on iPhone

    - by Jayshree
    Hello everybody. I want to send a SOAP request to a web service method, and I want to send the parameter as XML input for that request. Can anybody give me an idea of how to do this on iPhone? Right now I am sending the request in the following way:

        NSString *soapMessage = @"<?xml version=\"1.0\" encoding=\"utf-8\"?>\n"
            "<soap:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope\">\n"
            "<soap:Body>\n"
            "<GetCustomerInfoXML xmlns=\"http://qa2.alliancetek.com/phpwebservice\">\n"
            "<id><customer><id>1</id></customer></id>"
            "</GetCustomerInfoXML>"
            "</soap:Body>\n"
            "</soap:Envelope>";
        NSLog(@"%@", soapMessage);

        NSURL *url = [NSURL URLWithString:@"http://qa2.alliancetek.com/phpwebservice/index.php"];
        NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL:url];
        NSString *msgLength = [NSString stringWithFormat:@"%d", [soapMessage length]];
        [theRequest addValue:@"text/xml; charset=utf-8" forHTTPHeaderField:@"Content-Type"];
        [theRequest addValue:@"http://qa2.alliancetek.com/phpwebservice/index.php//GetCustomerInfoXML" forHTTPHeaderField:@"SOAPAction"];
        [theRequest addValue:msgLength forHTTPHeaderField:@"Content-Length"];
        [theRequest setHTTPMethod:@"POST"];
        [theRequest setHTTPBody:[soapMessage dataUsingEncoding:NSUTF8StringEncoding]];

    But I want to send the parameters in XML format, so how do I do that?

    Read the article

  • Please help me with performing a search in my program

    - by Abid
    I want to perform searching in my program. In my class I have made a function, i.e.:

        public DataTable Search()
        {
            // note: building SQL by string concatenation like this is open to SQL injection
            string SQL = "Select * from Customer where " + mField + " like '%" + mValue + "%'";
            DataTable dt = new DataTable();
            dt = dm.GetData(SQL);
            return (dt);
        }

    in which I have made setters and getters for mField and mValue, and where dm is an object of the class Datamanagement, in which I have made a function GetData, i.e.:

        public DataTable GetData(string SQL)
        {
            SqlCommand command = new SqlCommand();
            SqlDataAdapter dbAdapter = new SqlDataAdapter();
            DataTable dataTable = new DataTable();
            command.Connection = clsConnection.GetConnection();
            command.CommandText = SQL;
            dbAdapter.SelectCommand = command;
            dbAdapter.Fill(dataTable);
            return (dataTable);
        }

    And behind the search button, I have written:

        private void btnfind_Click(object sender, EventArgs e)
        {
            // cust is an object of the class Customer
            if (tbCustName.Text != "")
            {
                cust.Field = "CustName";
                cust.Value = tbCustName.Text;
            }
            else if (tbAddress.Text != "")
            {
                cust.Value = tbAddress.Text;
                cust.Field = "Address";
            }
            else if (tbEmail.Text != "")
            {
                cust.Value = tbEmail.Text;
                cust.Field = "Email";
            }
            else if (tbCell.Text != "")
            {
                cust.Value = tbCell.Text;
                cust.Field = "Cell";
            }
            DataTable dt = new DataTable();
            dt = cust.Search();
            dgCustomer.DataSource = dt;
            RefreshGrid(); // rebinds the grid right after the search results are bound
        }

    where my RefreshGrid function does this:

        private void RefreshGrid()
        {
            DataTable dt = new DataTable();
            dt = cust.GetCustomers(); // binds the full customer list, not the search results
            dgCustomer.DataSource = dt;
        }

    But this is not working, and I don't know why. Please help.

    Read the article

  • Keeping LINQ to SQL alive when using ViewModels (ASP.NET MVC)

    - by Kohan
    I have recently started using custom ViewModels (for example, CustomerViewModel):

        public class CustomerViewModel
        {
            public IList<Customer> Customers { get; set; }
            public int ProductId { get; set; }

            public CustomerViewModel(IList<Customer> customers, int productId)
            {
                this.Customers = customers;
                this.ProductId = productId;
            }

            public CustomerViewModel() { }
        }

    ... and am now passing them to my views instead of the entities themselves (for example, var Custs = repository.getAllCusts(id)), as it seems good practice to do so. The problem I have encountered is that by the time it gets to the view, I have lost the ability to lazy load on Customers. I do not believe this was the case before using them. Is it possible to retain the ability to lazy load while still using ViewModels, or do I have to eager load with this approach?

    Thanks, Kohan.

    Read the article

  • Tools for managing code deployment/versioning for IIS / Windows environments

    - by RizwanK
    I've got a strong background in Linux and OS X, and just left a job where I was architecting systems based on those platforms. Now I've got a Windows Server running IIS that hosts a number of different websites. Most of them are just a bunch of HTML, JS and images, with some ASP for some customer tools. (Each website has a different set of customer tools, or the same tools with minor code changes between them.) I'm also adding a development web server with the same code, but the 'bleeding edge' stuff.

    I need an effective way of managing changes and updates to the overall codebase (henceforth referring to the images, the HTML and the ASP, for all the sites). When a dev (or webmaster) checks in changes, I want them to show up automatically on the development server, but to be pushed out manually to the live server. I'd be tempted to just make the websites SVN repositories, but I'd be concerned about the overhead of the web developer having to log into the server and trigger an SVN update via the command line or TortoiseSVN (and heaven forbid, manage tags). Ideally I'd also manage IIS profile settings between the systems, but the major need is to be able to manage the process and expose it to our ASP developer and our webmaster, both of whom are used to just FTPing the files up to the live site.

    So, any recommendations on tools (beyond some SVN hacking with BAT files plus teaching the webmaster how to log into the server and do updates) or workflows that would help? I even considered an RPM-type package (or some Windows equivalent, of course) to manage the live server, but that seems like a bit too much overhead. Thanks.

    Read the article

  • Is the Subversion 'stack' a realistic alternative to Team Foundation Server?

    - by Robert S.
    I'm evaluating Microsoft Team Foundation Server for my customer, who currently uses Visual SourceSafe and nothing else. They have explicitly expressed a desire to implement a more rigid and process-driven environment, as their application is in production and they have future releases to consider. The particular areas I'm trying to cover are:

    - Configuration management (e.g., source control)
    - Change management (workflow and documentation for change requests, tasks)
    - Release management (builds and deployments)
    - Incident and problem management (issues and bugs)
    - Document management (similar to source control, but available via web)
    - Code analysis constraints on check-ins
    - A testing framework
    - Reporting
    - Visual Studio 2008 integration

    TFS does all of these things quite well, but it's expensive and complex to maintain, and the inexpensive Workgroup edition doesn't scale. We don't get TFS as part of our MSDN subscription. Those problems can be overcome, but before I tell my customer to go the TFS route, which in itself isn't a terrible thing, I wanted to evaluate the alternatives. I know Subversion is often suggested for its configuration management/source control, but what about the other areas? Would a combination of Subversion/NUnit/Wiki/CruiseControl/NAnt/something else satisfy all of these requirements? What tools do I need to include in my evaluation? Or should I just bite the bullet and go with TFS, since we're already invested in the Microsoft stack?

    Read the article
