Search Results

Search found 5505 results on 221 pages for 'customer showcase'.


  • How to distribute email delivery between 2 or more servers

    - by user181186
    We provide an email marketing service through our online app. We have about 30 customers, and each one has its own mailing list (5k to 20k addresses each). What we really want is to distribute email delivery between two or more servers. I was wondering what kind of approach or solution MailChimp and Constant Contact use to provide a great service. Many servers? Many IPs? Our spam policy suspends any user/customer whose messages bounce at a rate of 10% or more.
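    One common building block for this (a sketch, not a claim about how MailChimp actually does it) is to define one SMTP transport per outbound IP in Postfix and pin each sending customer to a transport. smtp_bind_address, smtp_helo_name and sender_dependent_default_transport_maps are real Postfix settings; the IPs, hostnames and map entries below are illustrative:

        # /etc/postfix/master.cf -- one delivery transport per outbound IP
        smtp-out1 unix - - n - - smtp
            -o smtp_bind_address=192.0.2.10
            -o smtp_helo_name=mta1.example.com
        smtp-out2 unix - - n - - smtp
            -o smtp_bind_address=192.0.2.11
            -o smtp_helo_name=mta2.example.com

        # /etc/postfix/main.cf -- route each sending customer to a fixed transport
        sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport

        # /etc/postfix/sender_transport (run postmap on it after editing)
        customer1@lists.example.com   smtp-out1:
        customer2@lists.example.com   smtp-out2:

    Pinning each customer to one IP also keeps a single high-bounce customer from damaging the sending reputation of every address you deliver from.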


  • Should we develop a custom membership provider in this case?

    - by Allen
    I'll be adding a bounty to this, probably 200, more if you think it's appropriate. I won't accept an answer until I can add a bounty, so feel free to go ahead and answer now.

    Summary
    Long story short, we've been tasked with gutting the authentication and authorization parts of a fairly old and bloated ASP.NET application that previously had all of these components written from scratch. Since our application isn't a typical one, and none of us have experience with ASP.NET's built-in membership provider, we're not sure whether we should roll our own authentication and authorization again or work within the ASP.NET membership provider mindset and develop our own membership provider.

    Our Application
    We have a fairly old ASP.NET application that gets installed at customer locations to service clients on a LAN. Admins create users (users do not sign up), and depending on the install, we may have the software integrated with LDAP. Currently, the LDAP integration bulk-imports the users to our database, and when they log in, it authenticates against LDAP so we don't have to manage their passwords. Nothing amazing there. Admins can assign users to one group, and they can change the authorization of that group to manage access to various parts of the software. Groups are maintained by admins (web-based UI) and, as said earlier, granted or denied permissions to certain functionality within the application. All this was written from the ground up without using any of the built-in .NET authorization or authentication. We literally have IsLoggedIn() methods that check for login and redirect to our login page if the user isn't.

    Our Rewrite
    We've been tasked to integrate more tightly with LDAP. They want us to tie groups in our application to groups (or whatever type of container LDAP uses) in LDAP, so that when a customer opts to use our LDAP integration, they don't have to manage their users in LDAP and in our application. The new way, they will simply create users in LDAP and add them to groups in LDAP, and our application will see that they belong to the appropriate LDAP group and authenticate and authorize them. In addition, we've been given the go-ahead to completely rip out the user authentication and authorization code and re-do it.

    Our Problem
    The problem is that none of us have any experience with ASP.NET membership provider functionality. The little exposure I have to it makes me worry that it was not intended to be used for an application such as ours. Still, developing our own ASP.NET membership provider and role manager sounds like it would be a great experience and most likely the appropriate thing to do. Basically, I'm looking for advice: should we be using the ASP.NET membership provider and role management API, or should we continue to roll our own? I know this decision will be influenced by our requirements, so I'm listing them below.

    Our Requirements
    A quick-and-dirty list:
    - Maintain the ability to have a database of users, authenticate them, and give admins (only, not users) the ability to CRUD users.
    - Allow the site to integrate with LDAP. When this is chosen, they don't want any users stored in the DB, only the relationship between groups as they exist in our app/db and the groups/containers as they exist in LDAP.
    - .NET 3.5 is being used (a mix of ASP.NET WebForms and ASP.NET MVC).
    - Has to work in ASP.NET and ASP.NET MVC (shouldn't be a problem, I'm guessing).
    - This can't be user-centric; administrators need to be the only ones that CRUD (or import via LDAP) users and groups.
    - We have to be able to authenticate via LDAP when configured to do so.

    I always try to monitor my questions closely, so feel free to ask for more info. As a general summary of what I'm looking for in an answer: "You should/shouldn't use xyz, here's why." Links regarding the ASP.NET membership provider and role management are very welcome; most of what I'm finding is 5+ years old. Edit: Added some stuff to "Our Rewrite". (A sketch of a custom role provider follows below.)
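    Since the LDAP-group-to-app-group mapping is the heart of the rewrite, here is an abridged sketch of what a custom provider can look like on .NET 3.5. The abstract surface (System.Web.Security.RoleProvider) is the real API; LdapDirectory is a hypothetical stand-in for your own LDAP lookup code, and the admin-managed members are simply disabled:

        using System;
        using System.Web.Security;

        // Hypothetical stand-in for your own LDAP code; replace with real
        // System.DirectoryServices calls.
        static class LdapDirectory
        {
            public static string[] GroupsForUser(string username) { return new string[0]; }
            public static bool GroupExists(string groupName) { return false; }
        }

        // Abridged sketch: map LDAP groups straight onto application roles.
        public class LdapGroupRoleProvider : RoleProvider
        {
            public override string ApplicationName { get; set; }

            public override string[] GetRolesForUser(string username)
            {
                // Assumption: one application role per LDAP group membership.
                return LdapDirectory.GroupsForUser(username);
            }

            public override bool IsUserInRole(string username, string roleName)
            {
                return Array.IndexOf(GetRolesForUser(username), roleName) >= 0;
            }

            public override bool RoleExists(string roleName)
            {
                return LdapDirectory.GroupExists(roleName);
            }

            // Groups are managed by admins in LDAP, not through this API,
            // so the mutating/reporting members can be switched off:
            public override void CreateRole(string roleName) { throw new NotSupportedException(); }
            public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotSupportedException(); }
            public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
            public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
            public override string[] GetUsersInRole(string roleName) { throw new NotSupportedException(); }
            public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotSupportedException(); }
            public override string[] GetAllRoles() { throw new NotSupportedException(); }
        }

    Whether this buys much over the existing IsLoggedIn() plumbing is exactly the judgment call the question asks about; the provider model mainly pays off if you want [Authorize] attributes and the built-in login machinery to work unmodified.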


  • jQuery / jqGrids / Submitting form data troubles...

    - by Kelso
    I've been messing with jqGrid a lot over the last couple of days, and I have nearly everything the way I want it from the display: tabs with different grids, etc. I want to use a modal for adding and editing elements on my grid. The problem I'm running into is this: I have editurl:"editsu.php" set. If that file is renamed, on edit I get a 404 in the modal, great! However, with that file in place, nothing at all seems to happen. I even put a die("testing"); line at the top, so it sees the file; it just doesn't seem to do anything with it. Below is the content of the index page:

        jQuery("#landings").jqGrid({
            url:'server.php?tid=1',
            datatype: "json",
            colNames:['ID','Tower','Sector','Client','VLAN','IP','DLink','ULink','Service','Lines','Freq','Radio','Serial','Mac'],
            colModel:[
                {name:'id', index:'id', width:50, align:'center', sortable:true, editable:true, editoptions:{size:10}},
                {name:'tower', index:'tower', width:85, align:'center', sortable:true, editable:false, editoptions:{readonly:true, size:30}},
                {name:'sector', index:'sector', width:50, align:'center', sortable:true, editable:true, editoptions:{readonly:true, size:20}},
                {name:'customer', index:'customer', width:175, align:'left', editable:true, editoptions:{readonly:true, size:35}},
                {name:'vlan', index:'vlan', width:35, align:'left', editable:true, editoptions:{size:10}},
                {name:'suip', index:'suip', width:65, align:'left', editable:true, editoptions:{size:20}},
                {name:'datadl', index:'datadl', width:55, editable:true, edittype:"select",
                 editoptions:{value:"<? $qr = qquery("select * from datatypes"); while ($q = ffetch($qr)) {echo "$q[id]:$q[name];";}?>"}},
                {name:'dataul', index:'dataul', width:55, editable:true, edittype:"select",
                 editoptions:{value:"<? $qr = qquery("select * from datatypes"); while ($q = ffetch($qr)) {echo "$q[id]:$q[name];";}?>"}},
                {name:'servicetype', index:'servicetype', width:85, editable:true, edittype:"select",
                 editoptions:{value:"<? $qr = qquery("select * from servicetype"); while ($q = ffetch($qr)) {echo "$q[id]:$q[name];";}?>"}},
                {name:'voicelines', index:'voicelines', width:35, align:'center', editable:true, editoptions:{size:30}},
                {name:'freqname', index:'freqname', width:35, editable:true, edittype:"select",
                 editoptions:{value:"<? $qr = qquery("select * from freqband"); while ($q = ffetch($qr)) {echo "$q[id]:$q[name];";}?>"}},
                {name:'radioname', index:'radioname', width:120, editable:true, edittype:"select",
                 editoptions:{value:"<? $qr = qquery("select * from radiotype"); while ($q = ffetch($qr)) {echo "$q[id]:$q[name];";}?>"}},
                {name:'serial', index:'serial', width:100, align:'right', editable:true, editoptions:{size:20}},
                {name:'mac', index:'mac', width:120, align:'right', editable:true, editoptions:{size:20}}
            ],
            rowNum:20,
            rowList:[30,50,70],
            pager: '#pagerl',
            sortname: 'sid',
            mtype: "GET",
            viewrecords: true,
            sortorder: "asc",
            altRows: true,
            caption:"Landings",
            editurl:"editsu.php",
            height:420
        });
        jQuery("#landings").jqGrid('navGrid','#pagerl',
            {edit:true, add:true, del:false, search:false},
            {height:400, reloadAfterSubmit:false},   // edit options
            {height:400, reloadAfterSubmit:false},   // add options
            {reloadAfterSubmit:false},               // delete options
            {});                                     // search options

    Now for the editsu.php file:

        $operation = $_REQUEST['oper'];
        if ($operation == "edit") {
            qquery("UPDATE customers SET
                        vlan        = '".$_POST['vlan']."',
                        datadl      = '".$_POST['datadl']."',
                        dataul      = '".$_POST['dataul']."',
                        servicetype = '".$_POST['servicetype']."',
                        voicelines  = '".$_POST['voicelines']."',
                        freqname    = '".$_POST['freqname']."',
                        radioname   = '".$_POST['radioname']."',
                        serial      = '".$_POST['serial']."',
                        mac         = '".$_POST['mac']."'
                    WHERE id = '".$_POST['id']."'") or die(mysql_error());
        }

    I'm just having a hard time troubleshooting this to figure out where it's getting hung up. My next question after this would be whether it's possible, when you click "add", to auto-insert a row into the db with a couple of predetermined values and then bring up the modal window, but I'll work on the first problem first. Thanks!
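    One low-tech way to see whether the modal ever reaches editsu.php is to log the raw request before doing anything else (a debugging sketch; the log path is arbitrary). It will also show you exactly which field names and 'oper' value jqGrid posts:

        <?php
        // Debugging aid: append every incoming request to a log file so you
        // can confirm the POST arrives and inspect 'oper' and the field names.
        file_put_contents('/tmp/editsu.log',
            date('c') . ' ' . var_export($_REQUEST, true) . PHP_EOL,
            FILE_APPEND);

    Separately, note that building the UPDATE by string concatenation is open to SQL injection; escaping each $_POST value (for example with mysql_real_escape_string()) is worth doing even on an internal tool.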


  • SharePoint 2010 Licensing Costs

    - by akil.franklin
    We will be implementing a public-facing website in SharePoint 2010, and I have a few questions regarding licensing:
    - Is there any (relatively) reliable pricing information available for SharePoint 2010? What about rumors?
    - What edition of SharePoint 2010 would be appropriate for a publicly facing website? (In 2007 you needed Enterprise for this, but it seems that WCM functionality is included in Standard in 2010.)
    - What would be a reasonable number to budget for SharePoint 2010 licensing for a publicly facing website?
    Note: I have tried asking Microsoft directly. Unless you are a volume-license customer, they direct you to a reseller (like CDW). Unfortunately, none of the resellers have pricing for 2010 yet; the SKU isn't even in their system. I was able to get in touch with the Microsoft pre-sales team and they confirmed that the price list for 2010 will be published on May 3rd (or thereabouts), but they weren't able to give me a price. Thanks in advance for your help!


  • CharField values disappearing after save (readonly field)

    - by jamida
    I'm implementing a simple "grade book" application where the teacher can update the grades without being allowed to change the students' names (at least not on the update-grade page). To do this I'm using one of the read-only tricks, the simplest one. The problem is that after the submit, the view is re-displayed with blank values for the students. I'd like the students' names to re-appear. Below is the simplest example that exhibits this problem. (This is poor DB design, I know; I've extracted just the relevant parts of the code to showcase the problem. In the real example, student is in its own table, but the problem still exists there.)

    models.py:

        class Grade1(models.Model):
            student = models.CharField(max_length=50, unique=True)
            finalGrade = models.CharField(max_length=3)

        class Grade1OForm(ModelForm):
            student = forms.CharField(max_length=50, required=False)

            def __init__(self, *args, **kwargs):
                super(Grade1OForm, self).__init__(*args, **kwargs)
                instance = getattr(self, 'instance', None)
                if instance and instance.id:
                    self.fields['student'].widget.attrs['readonly'] = True
                    self.fields['student'].widget.attrs['disabled'] = 'disabled'

            def clean_student(self):
                instance = getattr(self, 'instance', None)
                if instance:
                    return instance.student
                else:
                    return self.cleaned_data.get('student', None)

            class Meta:
                model = Grade1

    views.py:

        from django.forms.models import modelformset_factory

        def modifyAllGrades1(request):
            gradeFormSetFactory = modelformset_factory(Grade1, form=Grade1OForm, extra=0)
            studentQueryset = Grade1.objects.all()
            if request.method == 'POST':
                myGradeFormSet = gradeFormSetFactory(request.POST, queryset=studentQueryset)
                if myGradeFormSet.is_valid():
                    myGradeFormSet.save()
                    info = "successfully modified"
            else:
                myGradeFormSet = gradeFormSetFactory(queryset=studentQueryset)
            return render_to_response('grades/modifyAllGrades.html', locals())

    template:

        <p>{{ info }}</p>
        <form method="POST" action="">
        <table>
        {{ myGradeFormSet.management_form }}
        {% for myform in myGradeFormSet.forms %}
            {# myform.as_table #}
            <tr>
            {% for field in myform %}
                <td> {{ field }} {{ field.errors }} </td>
            {% endfor %}
            </tr>
        {% endfor %}
        </table>
        <input type="submit" value="Submit">
        </form>
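    For what it's worth, the usual explanation of this symptom (an assumption about this exact setup, but consistent with it) is that browsers do not submit disabled inputs, so the POST arrives without student values and the bound formset re-renders them blank. One sketch of a fix is to rebuild the formset from the database after a successful save:

        # views.py (sketch): re-query after saving so the redisplayed formset
        # carries the stored student names instead of the bound, empty POST data.
        if myGradeFormSet.is_valid():
            myGradeFormSet.save()
            info = "successfully modified"
            myGradeFormSet = gradeFormSetFactory(queryset=Grade1.objects.all())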


  • Backup to disk, encrypted, without any installed local software

    - by user30064
    Hi. OK, this is a tough one, and it might not even be possible, but no harm in asking, I guess. I have a Buffalo Terastation file server that I use for network-attached storage. After a couple of phone calls to customer services, I realised that there is no way to back up to disk encrypted. In effect, I would be carrying unencrypted company data off-site daily, which is obviously unacceptable. I had a go at TrueCrypt, EncFS, and a few others, and as far as I could see all of them require that you install some software on the machine that is to use the file system, which makes sense. Unfortunately, the firmware on the Terastation is closed and I cannot install any software (and I can't build from source either, since Buffalo didn't include a compiler). Are there any ways to copy files to disk so that, as soon as they are written, they are transparently encrypted, without having to install additional software? I'm not sure it matters too much, but the Terastation firmware is Linux-based, although, as I mentioned, closed. Many thanks, Andreas


  • How to reduce Entity Framework 4 query compile time?

    - by Rup
    Summary: we're having problems with EF4 query compilation times of 12+ seconds. Cached queries will only get us so far; are there any ways we can actually reduce the compilation time? Is there anything we might be doing wrong that we can look for? Thanks!

    We have an EF4 model which is exposed over WCF services. For each of our entity types we expose a method to fetch and return the whole entity for display/edit, including a number of referenced child objects. For one particular entity we have to .Include() 31 tables/sub-tables to return all relevant data. Unfortunately this makes the EF query compilation prohibitively slow: it takes 12-15 seconds to compile and builds a 7,800-line, 300K query. This is the back-end of a web UI which will need to be snappier than that.

    Is there anything we can do to improve this? We can CompiledQuery.Compile this; that does no work until first use, and so helps the second and subsequent executions, but our customer is nervous that the first usage shouldn't be slow either. Similarly, if the IIS app pool hosting the web service gets recycled, we'll lose the cached plan, although we can increase lifetimes to minimise this. Also, I can't see a way to precompile this ahead of time and/or to serialise out the EF compiled-query cache (short of reflection tricks); the CompiledQuery object only contains a GUID reference into the cache, so it's the cache we really care about. (Writing this out, it occurs to me that I could kick off something in the background from app startup to execute all the queries and get them compiled; is that safe? See the sketch below.)

    However, even if we do solve that problem, we build up our search queries dynamically with LINQ-to-Entities clauses based on which parameters we're searching on. I don't think the SQL generator does a good enough job that we can move all that logic into the SQL layer, so I don't think we can pre-compile our search queries. This is less serious because the search results use fewer tables, so it's only a 3-4 second compile rather than 12-15, but the customer thinks that still won't really be acceptable to end users. So we really need to reduce the query compilation time somehow.

    Any ideas? Profiling points to ELinqQueryState.GetExecutionPlan as the place to start, and I have attempted to step into that, but without the real .NET 4 source available I couldn't get very far, and the source generated by Reflector won't let me step into some functions or set breakpoints in them. The project was upgraded from .NET 3.5, so I have tried regenerating the EDMX from scratch in EF4 in case there was something wrong with it, but that didn't help. I have tried the EFProf utility advertised here, but it doesn't look like it would help with this; my large query crashes its data collector anyway. I have run the generated query through SQL performance tuning and it already has 100% index usage; I can't see anything wrong with the database that would cause the query generator problems. Is there something O(n^2) in the execution-plan compiler? Is breaking this down into blocks of separate data loads, rather than all 32 tables at once, likely to help? Setting EF to lazy-load didn't help. I've bought the pre-release O'Reilly Julie Lerman EF4 book, but I can't find anything in there beyond 'compile your queries'. I don't understand why it's taking 12-15 seconds to generate a single select across 32 tables, so I'm optimistic there's some scope for improvement! Thanks for any suggestions!

    We're running against SQL Server 2008, in case that matters, on XP / 7 / Server 2008 R2, using RTM VS2010.
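    As a starting point for the app-startup idea, a sketch of pre-warming a compiled query (MyEntities, Order and the member names are illustrative stand-ins; CompiledQuery.Compile itself is the real EF4 API, and the plan is cached per delegate, so a throwaway first call pays the compilation cost off the request path):

        using System;
        using System.Data.Objects;   // EF4: ObjectContext, CompiledQuery
        using System.Linq;

        static class OrderQueries
        {
            // Compiled once per AppDomain; later calls reuse the cached plan.
            public static readonly Func<MyEntities, int, Order> ById =
                CompiledQuery.Compile((MyEntities ctx, int id) =>
                    ctx.Orders.Include("Customer").FirstOrDefault(o => o.Id == id));
        }

        // e.g. in Global.asax Application_Start:
        // System.Threading.ThreadPool.QueueUserWorkItem(_ =>
        // {
        //     using (var ctx = new MyEntities())
        //     {
        //         OrderQueries.ById(ctx, -1);   // forces compilation; result discarded
        //     }
        // });

    Warming on a background thread with its own context is safe in the sense that each request still gets its own context; whether the first real request beats the warm-up is a race, so warming synchronously during Application_Start is the conservative variant.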


  • Removing Exchange 2010 and SBS2011 gracefully after migration to Server 2008 Std R2

    - by user145275
    We have recently completed a server replacement for a customer. They had SBS2011 using Exchange 2010; they now have Server 2008 R2 Standard and Google Apps email. We have migrated DHCP, DNS, the file server and all 5 FSMO roles to the new 2008 R2 server (today). During the grace period for SBS2011, we intend to decommission the old server completely. Previous experience would suggest uninstalling Exchange 2010, then demoting SBS2011, then removing it from the domain and switching it off. Can I simply demote SBS2011 without removing Exchange? I can't really find any walkthroughs on this. My concern is that if we simply turn off SBS2011, the AD is left in a mess, with legacy Exchange objects making any potential reintroduction of Exchange difficult in future. Plus, I want to do it the right way!


  • Server 2008 Active Directory DNS Entries Deleted. Dcdiag unable to contact local AD controller.

    - by Jim Smith
    I've never seen anything like this. At a potential customer site, I noticed that the PCs were all unable to locate the domain controller and netlogon was failing. I fixed the DNS entries on the client PCs so the AD server was their DNS server and tried to rejoin the domain; the PC was still unable to locate the domain controller. On the server, I checked the DNS settings, and while the high-level AD zone is there, every single entry related to Active Directory appears to have been completely deleted. There are no backups from what I can tell, and this has been happening for at least 6 months. Does anyone have any recommendations for repairing this? Thanks.
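    For what it's worth, if the AD-integrated zone itself survives and still allows dynamic updates, the SRV records can usually be re-registered rather than recreated by hand. A sketch of the standard commands, run on the domain controller (dcdiag ships with Server 2008, and restarting Netlogon replays the registrations from netlogon.dns):

        ipconfig /registerdns
        net stop netlogon
        net start netlogon
        dcdiag /test:dns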


  • External optical drive won't plug 'n play...sometimes.

    - by Connor
    Alright, I've looked at this a lot of different ways, but here goes. I have an external DVD burner that connects via USB and should be plug 'n play. However, sometimes when I connect it, a message pops up requesting a CD or installation software that it came with. It didn't come with any, though. The confusing part is that if I wait a couple of weeks and plug it in, it works fine, but if I turn it off and back on, the problem arises again. I've tried different computers, same problem. There's little to no customer support from the company, Ativa, and no drivers for the product. I'm hoping there's a registry trick for this. Any help?


  • AWS SSL Load Balancer

    - by Jay Francis
    OK, I am looking for some pointers. Basically, I have a white-label app/site that will allow users to set up their own domain to use for their customer front-end. We have 2 dedicated servers and a load balancer. The problem is SSL: we were thinking about using AWS ELB to handle the SSL load balancing, but we can't seem to figure out whether it will handle this properly; it seems to be set up to work with EC2 instances, but we are using externally hosted servers behind a load balancer. A blog post by AWS looks similar to what we need, but it only seems to work with EC2 instances. http://aws.typepad.com/aws/2011/08/elastic-load-balancer-ssl-support-options.html Has anyone had experience setting up ELB SSL load balancers to work with external servers?


  • Windows update install hangs

    - by Richo
    I'm trying to deploy .NET 3.5 on an XP Home SP2 machine for a customer. The install fails with an error about WIC. Trying to install WIC on its own gave an error about not being able to verify the application because the Cryptographic service wasn't running. A reboot got that service running, but now the install just hangs on "inspecting your current configuration". Task Manager reports that DrWatson is doing a lot (constantly using 10% of processor time), but beyond that I don't know how to instrument what a Windows machine is doing. Has anyone seen this before?


  • "Request not supported" in IPCONFIG (WinXP SP3)

    - by pablog
    On a customer's PC (Windows XP SP3), the network suddenly went down: the network adapter appears with an error mark. I replaced the network card, but the new one does the same thing. When I run IPCONFIG, XP shows this error (in both standard and safe mode):

        Internal error occurred
        Request not supported
        Unable to query host name

    If I start the system with a boot CD, the PC runs fine, so the problem seems to be in the XP installation. I have tried: uninstalling and reinstalling the network card in Device Manager; disabling and re-enabling the card; netsh int ip reset; netsh winsock reset catalog; and a couple of "reset" programs (WinsockxpFix.exe, etc.), with no luck. Is there any way to fix it without reinstalling XP? TIA, Pablo


  • PHP: How do I loop through every XML file in a directory?

    - by celebritarian
    Hi! I'm building a simple application. It's a user interface to an online order system. Basically, the system is going to work like this: other companies upload their purchase orders to our FTP server. These orders are simple XML files (containing things like customer data, address information, ordered products and quantities). I've built a simple user interface in HTML5, jQuery and CSS, all powered by PHP. PHP reads the content of an order (using the built-in features of SimpleXML) and displays it on the web page. So it's a web app, supposed to always be running in a browser at the office. The PHP app will display the content of all orders, and every fifteen minutes or so it will check for new orders. How do I loop through all XML files in a directory? Right now, my app is able to read the content of a single XML file and display it nicely on the page. My current code looks like this:

        // pick a random order that I know exists in the Order directory:
        $xml_file = file_get_contents("Order/6366246.xml", FILE_TEXT);
        $xml = new SimpleXMLElement($xml_file);

        // echo basic order information, like the order number:
        echo $xml->OrderHead->ShopPO;
        // more information about the order and the customer goes here…

        echo "<ul>";
        // loop through each order line, and echo all quantities and products:
        foreach ($xml->OrderLines->OrderLine as $orderline) {
            echo "<li>" . $orderline->Quantity . " st.</li>\n" .
                 "<li>" . $orderline->SKU . "</li>\n";
        }
        echo "</ul>";
        // more information about delivery options, address information etc. goes here…

    So, that's my code. Pretty simple. It only needs to do one thing: print out the content of all order files on the screen, so my colleagues and I can see each order, confirm it and deliver it. That's it. But right now, as you can see, I'm selecting one single order at a time from the Order directory. How do I loop through the entire Order directory and read and display the content of each order (like above)? I'm stuck. I don't really know how to get all (XML) files in a directory and then do something with each file (like reading it and echoing out the data, as I want to). I'd really appreciate some help. I'm not very experienced with PHP/server-side programming, so if you could help me out here I'd be very grateful. Thanks a lot in advance! // Björn (celebritarian at me dot com)
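    A minimal sketch of the missing loop, using PHP's built-in glob() to pick up every .xml file in the directory (the Order/ path and the rendering are taken from the question):

        <?php
        // Render every order currently sitting in the Order directory.
        foreach (glob("Order/*.xml") as $path) {
            $xml = new SimpleXMLElement(file_get_contents($path));
            echo $xml->OrderHead->ShopPO;
            // ...the same per-order markup as above goes here...
        }

    glob() returns an empty array when nothing matches, so the page simply renders no orders on a quiet day; DirectoryIterator is the other common way to do the same walk.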


  • RAID-0 problem with a Sony sporting a new HDD

    - by redrock
    Sony Windows 7 PC. It originally had 2 × 300 GB HDDs. One HDD completely pancaked, so I have replaced it with a new 500 GB HDD. When both drives are connected, the 300 GB drive doesn't appear to be recognised as a separate entity. The BIOS sees it, but the operating system only sees a total of 465 GB of HDD space. With both disks attached, Disk Management shows one 465 GB volume as RAID 0 and the new drive as STxxxxxx 465 GB. My question, I guess, is what should I see in total HDD space, and is this configured correctly? I thought I would see two separate drives, 1 × 500 GB and 1 × 300 GB. My customer insisted that prior to the HDD crash he saw two drives, both registering as 300 GB (a C: and a D: drive).


  • Execution plan different on two different Sql Servers

    - by Lieven Cardoen
    On our SQL server, a query takes 20 ms. If I look at the execution plan, parallelism is used: hash match, bitmap create, and a lot of operators with two arrows pointing to the left. On a customer's SQL server where our product runs, the same query takes 2500 ms, and in their execution plan no parallelism or any of the operators with arrows appear. I've been searching for a couple of days for why the query runs so much slower on their server. Is parallelism, and all of the other things, something that can be configured on their SQL server? And how do you configure that? What are the dangers of using parallelism? Another strange thing is that on our server the query needs some 1,200 reads and 0 writes; on their SQL server it needs 1.5 million reads and some 1,500 writes. Why these writes when only a read query is executed?
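    Parallelism can indeed be switched off (or priced out) per server. A quick way to check the customer's settings, assuming sysadmin rights (both are standard sp_configure options):

        -- 'max degree of parallelism' = 1 disables parallel plans outright;
        -- a very high 'cost threshold for parallelism' has a similar effect.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max degree of parallelism';
        EXEC sp_configure 'cost threshold for parallelism';

    The read/write gap is more likely stale statistics or tempdb spills than configuration, so comparing index fragmentation and statistics freshness on the two servers is worth doing before touching these knobs.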


  • sendmail is using return-path instead of from address

    - by magd1
    I have a customer who is complaining about emails being marked as spam. I'm looking at the header, and it shows the correct From: [email protected]. However, the receiving server doesn't like the Return-Path:

        Return-Path: <[email protected]>
        Received-SPF: neutral (google.com: x.x.x.x is neither permitted nor denied by domain of [email protected]) client-ip=x.x.x.x;
        Authentication-Results: mx.google.com; spf=neutral (google.com: x.x.x.x is neither permitted nor denied by domain of [email protected]) [email protected]

    How do I configure sendmail to use the From address for the Return-Path?
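    Note that the Return-Path is not a header sendmail copies from From:; it is the envelope sender (SMTP MAIL FROM), which is what SPF is evaluated against. Two common fixes, sketched here: have the application set the envelope sender explicitly (sendmail -f user@customerdomain.example), or masquerade the envelope in the sendmail config (MASQUERADE_AS and masquerade_envelope are real sendmail m4 features; the domain is a placeholder):

        dnl In sendmail.mc: rewrite both header and envelope sender addresses
        dnl so the Return-Path matches the From: domain.
        MASQUERADE_AS(`customerdomain.example')dnl
        FEATURE(`masquerade_envelope')dnl

    Rebuild sendmail.cf from the .mc and restart sendmail afterwards; publishing an SPF record for whichever domain ends up in the Return-Path is what actually moves the verdict from neutral to pass.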


  • Any ideas for automatic initial/scripted configuration of a NetApp?

    - by maddenca
    The background: we want to automate, as much as possible, the building of 'pods' that include server/network/storage and that will be built at remote sites. Ideally, we would create a single management server which is preconfigured with DHCP/TFTP/or whatever. This management server is racked with a Cisco UCS, FAS31x0, etc. at a build site and is then transported to the final customer site, where on power-up it almost configures itself, or at least bootstraps itself far enough that a remote skilled 'expert' can complete the setup of the pod. Ideas (they don't have to resemble 100% of the above) would be helpful.


  • Reviewing firewall rules

    - by chmeee
    I need to review the firewall rules of a CheckPoint firewall for a customer (200+ rules). I have used FWDoc in the past to extract the rules and convert them to other formats, but there were some errors with exclusions. I then analyze them manually to produce an improved version of the rules (usually in OOo Calc) with comments. I know there are several visualization techniques, but they all come down to analyzing the traffic, and I want static analysis. So I was wondering: what process do you follow to analyze firewall rules? What tools do you use (not only for CheckPoint)?


  • SQL query duplicating results [on hold]

    - by Ben
    I have written a query that retrieves data for the top 5 customers per account manager in my table. Here is the query:

        SELECT account_manager_id, mgap_ska_id, total
        FROM (SELECT account_manager_id, mgap_ska_id,
                     mgap_growth + mgap_recovery AS total,
                     @grp_rank := IF(@current_accmanid = account_manager_id, @grp_rank + 1, 1) AS grp_rank,
                     @current_accmanid := account_manager_id
              FROM mgap_orders
              ORDER BY total DESC) ranked
        WHERE grp_rank <= 5

    and here is the result of the query:

        account_manager_id  mgap_ska_id  total
        159840              5062352      61569.21
        159840              5062352      61569.21
        159840              5062352      61569.21
        159840              5062352      61569.21
        159840              5062352      61569.21
        160023              5024546      52244.29
        160023              5024546      52244.29
        160023              5024546      52244.29
        160023              5024546      52244.29
        160023              5024546      52244.29
        159669              5323292      50126.38
        159669              5323292      50126.38
        159669              5323292      50126.38
        159669              5323292      50126.38
        159669              5323292      50126.38

    As you can see, the query is partially working as needed, except I'm getting the same mgap_ska_id duplicated, whereas it should be five distinct mgap_ska_id numbers. Here is a sample of my data:

        mgap_ska_id  account_manager_id  mgap_growth  mgap_recovery
        5057810      64154               0            1160.78
        5178114      24456               0            5773.42
        5292421      160338              0            5146.04
        5414091      24408               0            104.14
        5057810      64154               0            1160.78

    Can anyone see where I've gone wrong in my query, and how/where I might correct the error so I get the top 5 individual customers (mgap_ska_id) instead of the duplicated top single customer?


  • Recommended RAM and disc space for Oracle 11g on Windows

    - by Álvaro G. Vicario
    I need to provide the recommended amount of RAM and disc space (divided into two partitions) so the customer can create an appropriate virtual machine to run Oracle. All I could find in the documentation was a brief listing with minimum RAM and typical/advanced install types. The virtual machine will run the latest Oracle Standard Edition One (11g Release 2 so far) under Windows Server 2008 x64 and will host a reasonably low-traffic web application. How much RAM and disc space must I ask for in order to be safe? (Feel free to ask for further details if I've omitted something relevant.)

    Update: rough estimates:
    - Database size: 10 MB after installation
    - Growth rate: +3 MB per day on average
    - Size of database 'active' data: (not sure what this means; there's no actual archive, so I guess all data is current)
    - Amount of data written per second in peak hours: a few KB
    - Number of client sessions: 3 or 4 at most
    - Frequency and response size of the heaviest requests: some reports make heavy table JOINs that need up to 20 seconds to complete, but they won't return more than a few thousand rows of plain text. The app also handles BLOBs (typical size from 50 KB to 200 KB)


  • Version control and branching when using Oracle

    - by Ed Woodcock
    Hi folks: at work we're using Oracle and C#/ASP.NET to handle a customer's website. This site is very large-scale, so the database is very large. We use Perforce for our version control and attach create-or-replace scripts to FogBugz cases whenever a database change is made, which has been fine until now, as we are at a point where five developers are working on five expansions of the system, each on a separate Perforce branch. Unfortunately, we cannot get duplicate databases, due to the database size, so everyone is still working from the same one. This is obviously a cause of problems: only ten minutes ago we had a bit of an issue where a stored-procedure change for a branch propagated over to the pre-production server and caused a large number of crashes for the testers. Ideally, we would like a way to track these changes without having to keep track of them manually through FogBugz. My question is: how do you handle this situation? I'm sure there must be a good way by now to handle versioning, or at least tracking changes, in an Oracle database.


  • how to find a good data center?

    - by drewda
    At my start-up, we're getting to the point where we should be hosting our servers at a data center. I'd appreciate any tips and tricks y'all can offer on finding a reputable place to colocate our racks. Are there any Web sites with customer reviews of data centers or should I just be asking around at techie events? Are unlimited bandwidth plans a gimmick or becoming the norm? Is it worth establishing a redundant set of machines at a second data center from Day One? Or just do offsite back-ups? Thanks for your suggestions.


  • How can I sync Nokia smartphones' calendars with Snow Leopard Server's iCal Server?

    - by Joe Carroll
    I've just installed and configured a Mac Mini Server for a customer who wanted to stop using Google Apps for their email. The plan was to also use the new server's calendaring service, but we've hit a small snag: the staff all use iCal on Macs and Nokia E71s, and there doesn't seem to be a way to use a single calendar per person that syncs with both. iSync/iCal doesn't appear to allow their manually-synced phones to sync with the CalDAV calendars on the server, only with local calendars on their Macs. Nokia's organiser software doesn't support CalDAV either; it uses SyncML for live networked syncing. I was wondering if there's a plugin for iCal Server that would provide a bridge to a SyncML service I could run on the server... or anything else that would work (preferably FOSS)!


