Search Results

Search found 39614 results on 1585 pages for 'value chain planning'.


  • How to retrieve ID of button clicked within usercontrol on Asp.net page?

    - by Shawn Gilligan
     I have a page I am working on that links in multiple user controls. The user control contains 3 buttons: an Attach, a Clear and a View button. When a user clicks on any control on the page, the resulting information is "dumped" into the last visible control on the page.

       <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="Default" MasterPageFile="DefaultPage.master" %>
       <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="ajaxToolkit" %>
       <%@ Register tagName="FileHandler" src="FileHandling.ascx" tagPrefix="ucFile" %>
       <asp:Content ID="Content1" ContentPlaceHolderID="Main" Runat="Server">
         <asp:UpdatePanel ID="upPanel" UpdateMode="Conditional" runat="server">
           <ContentTemplate>
             <table>
               <tr>
                 <td> <ucFile:FileHandler ID="fFile1" runat="server" /> </td>
                 <td> <ucFile:FileHandler ID="fFile2" runat="server" /> </td>
               </tr>
             </table>
           </ContentTemplate>
         </asp:UpdatePanel>
       </asp:Content>

     All file handling and processing is handled within the control, which raises an event with the generated file name once the upload to the file server is complete. Whichever button is clicked, the file name is always stored, internal to the control, in the last control's text box.

     Control code:

       <table style="width: 50%;">
         <tr style="white-space: nowrap;">
           <td style="width: 1%;">
             <asp:Label runat="server" ID="lblFile" />
           </td>
           <td style="width: 20%;">
             <asp:TextBox ID="txtFile" CssClass="backColor" runat="server" OnTextChanged="FileInformationChanged" />
           </td>
           <td style="width: 1%">
             <%--<asp:Button runat="server" ID="btnUpload" CssClass="btn" Text="Attach" OnClick="UploadFile"/>--%>
             <input type="button" id="btnUpload" class="btn" tabindex="30" value="Attach" onclick="SetupUpload();" />
           </td>
           <td style="width: 1%">
             <%--<asp:Button runat="server" ID="btnClear" Text="Clear" CssClass="btn" OnClick="ClearTextValue"/>--%>
             <input type="button" id="btnClearFile" class="btn" value="Clear" onclick="document.getElementById('<%=txtFile.ClientID%>').value = '';document.getElementById('<%=hfFile.ClientID%>').value = '';" />
           </td>
           <td style="width: 1%">
             <a href="#here" onclick="ViewLink(document.getElementById('<%=hfFile.ClientID%>').value, '')">View</a>
           </td>
           <td style="width: 1%">
             <asp:HiddenField ID="hfFile" runat="server" />
           </td>
         </tr>
       </table>

       <script type="text/javascript">
         var ItemPath = "";

         function SetupUpload(File) {
           ItemPath = File;
           VersionAttach('<%=UploadPath%>', 'true');
         }

         function UploadComplete(File) {
           document.getElementById('<%=txtFile.ClientID%>').value = File.substring(File.lastIndexOf("/") + 1);
           document.getElementById('<%=hfFile.ClientID%>').value = File;
           alert('<%=txtFile.Text %>');
           alert('<%=ClientID %>')
         }

         function ViewLink(File, Alert) {
           if (File != "") {
             if (File.indexOf("../data/") != -1) {
               window.open(File, '_blank');
             } else {
               window.open('../data/<%=UploadPath%>/' + File, '_blank');
             }
           } else if (Alert == "") {
             alert('No file has been uploaded for this field.');
           }
         }
       </script>


  • How to model a relationship that NHibernate (or Hibernate) doesn’t easily support

    - by MylesRip
     I have a situation in which the ideal relationship, I believe, would involve value object inheritance. This is unfortunately not supported in NHibernate, so any solution I come up with will be less than perfect. Let's say that:

     - "Item" entities have a "Location" that can be in one of multiple different formats. These formats are completely different, with no overlapping fields. We will deal with each Location in the format provided in the data, with no attempt to convert from one format to another. Each Item has exactly one Location.
     - "SpecialItem" is a subtype of Item that is unique in that it has exactly two Locations.
     - "Group" entities aggregate Items. "LocationGroup" is a subtype of Group. LocationGroup also has a single Location, which can be in any of the formats described above.
     - Although I'm interested in Items by Group, I'm also interested in being able to find all Items with the same Location, regardless of which Group they are in.

     I apologize for the number of stipulations listed above, but I'm afraid that simplifying it any further wouldn't really reflect the difficulties of the situation. Here is how the above could be diagrammed: Mapping Dilemma Diagram (http://www.freeimagehosting.net/uploads/592ad48b1a.jpg). (I tried placing the diagram inline, but Stack Overflow won't allow that until I have accumulated more points. I understand the reasoning behind it, but it is a bit inconvenient for now. Apparently I can't have multiple links either.)

     Analyzing the above, I make the following observations:

     - I treat Locations polymorphically, referring to the supertype rather than the subtype.
     - Logically, Locations should be "value objects" rather than entities, since it is meaningless to differentiate between two Location objects that have all the same values. Equality between Locations should therefore be based on field comparisons, not identifiers (a small illustration of this follows at the end of this post). Also, value objects should be immutable, and shared references should not be allowed.
     - Using NHibernate (or Hibernate), one would typically map value objects using the "component" keyword, which causes the fields of the class to be mapped directly into the database table that represents the containing class. Put another way, there would be no separate "Locations" table in the database (and Locations would therefore have no identifiers).
     - NHibernate (or Hibernate) does not currently support inheritance for value objects.

     My choices as I see them are:

     1. Ignore the fact that Locations should be value objects and map them as entities. This would take care of the inheritance mapping issues, since NHibernate supports entity inheritance. The downside is that I then have to deal with aliasing issues (meaning that if multiple objects share a reference to the same Location, changing the values of one object's Location would change the Location of every other object that references the same record). I want to avoid this if possible. Another downside is that entities are typically compared by their IDs, so two Location objects would be considered unequal even if all their field values were the same. That would be invalid and unacceptable from the business perspective.
     2. Flatten Locations into a single class so that there are no longer inheritance relationships for Locations. This would allow Locations to be treated as value objects, which could easily be handled by using "component" mapping in NHibernate. The downside in this case is that the domain model becomes weaker, more fragile and less maintainable.
     3. Do some "creative" mapping in the hbm files in order to force Location fields to be mapped into the containing entities' tables without using the "component" keyword. This approach is described by Colin Jack here. My situation is more complicated than the one he describes, because SpecialItem has a second Location and because a different entity, LocationGroup, also has Locations. I could probably get it to work, but the mappings would be non-intuitive and therefore hard to understand and maintain by other developers in the future. I also suspect that these tricky mappings would not be possible using Fluent NHibernate, so I would lose the advantages of using that tool, at least in this situation.

     Surely others out there have run into similar situations; I'm hoping someone who has "been there, done that" can share some wisdom. So here's the question: which approach should be preferred in this situation, and why?
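     To make the value-object equality point from the observations above concrete, here is a tiny illustration (in Python rather than C#, with a made-up GpsLocation format as one hypothetical Location type): a value object is immutable and compares by its field values, whereas an entity compares by identity.

       from dataclasses import dataclass

       @dataclass(frozen=True)        # immutable, equality generated from the fields
       class GpsLocation:
           lat: float
           lon: float

       a = GpsLocation(51.5, -0.12)
       b = GpsLocation(51.5, -0.12)
       print(a == b)   # True: value-object semantics, field-by-field comparison
       print(a is b)   # False: two separate instances; identity does not matter here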


  • jquery .append() not working for my html

    - by user1056998
     I have a program that uses jQuery to append a hidden input (type="hidden") to my HTML, so that when I click the submit button the value is passed to a PHP file where I can process it. However, it seems that the hidden input is not really being appended to the HTML, nor is it being passed to the PHP file. I already used method="get" to see the values in the address bar and print_r to see the values being received, but there's nothing. To check whether my form actually passes a value, I added an <input type="hidden" name="absent[]" value="testing" /> in the HTML and that value got passed, but the ones appended by the jQuery aren't. Here are my files:

     jQuery:

       $(function(){
           $("td").click(function(){
               if($(this).hasClass("on"))
               {
                   alert("Already marked absent");
               }
               else
               {
                   $(this).addClass("on");
                   var currentCellText = $(this).text();
                   var temp = $(this).attr('id');
                   $("#collect").append("<input type='hidden' name='absent[]' value = '" + temp + "'/>" + currentCellText);
                   alert(temp);
               }
           });
           $("#clicky").click(function(){
               $("td").removeClass("on");
               $("#collect").text('');
               $("#collect").append("Absentees: <br>")
               alert(temp);
           });
       });

     Here is the HTML part:

       <?php
       session_start();
       include 'connectdb.php';
       $classID = $_SESSION['csID'];
       $classQry = "SELECT e.csID, c.subjCode, c.section, b.subj_name, e.studentID, CONCAT(s.lname, ', ' , s.fname)name FROM ENROLLMENT e, CLASS_SCHEDULE c, STUDENT s, SUBJECT b WHERE e.csID = c.csID AND c.csID = '" . $classID . "' AND c.subjCode = b.subjCode AND e.studentID = s.studentID ORDER BY e.sort;";
       $doClassQry = mysql_query($classQry);
       echo "<table id='tableone'>";
       while($x = mysql_fetch_array($doClassQry))
       {
           $subject = $x['subj_name'];
           $subjCode = $x['subjCode'];
           $section = $x['section'];
           $studentArr[] = $x['name'];
           $studentID[] = $x['studentID'];
       }
       echo "<thead>";
       echo "<tr><th colspan = 7>" . "This is your class: " . $subjCode . " " . $section . " : " . $subject . "</th></tr>";
       echo "</thead>";
       echo "<tbody>";
       echo "<tr>";
       for($i = 0; $i < mysql_num_rows($doClassQry); $i++)
       {
           if($i % 7 == 0)
           {
               echo "</tr><tr><td id = '". $studentID[$i] . " '>" . $studentArr[$i] . "</td>";
           }
           else
           {
               echo "<td id = '". $studentID[$i] . " '>" . $studentArr[$i] . "</td>";
           }
       }
       echo "</tr>";
       echo "</tbody>";
       echo "</table>";
       ?>

     Here's the HTML part with the form:

       <form name="save" action="saveTest.php" method="post">
         <div id="submitt">
           <input type="hidden" name="absent[]" value="testing"/>
           <input type="submit" value="submit"/>
         </div>
       </form>

     And here's the PHP part that processes the form (saveTest.php):

       <?php
       $absent = $_POST['absent'];
       //echo "absnt" . $absent[] . "<br>";
       echo count($absent) . "<br>";
       //print_r($_POST) . "<br>";
       ?>


  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach? We currently have a single live server WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMWare ESX 4 hosts. The servers are domain-joined. We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive. We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers. It will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing. But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us. So if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach: We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well. We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share. Sites will no longer be able to access their content if there's ever an Active Directory problem. In general, it just seems a lot more complicated, with more moving parts that could break. Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMWare environment; this VM would then expose the SMB share to our web servers. On the other hand, a benefit of the shared content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content. This thread is pretty interesting, though some of these people are working at a much larger scale. I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for the applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share. What do you think? Thanks for your help, Richard


  • Windows 8.1 Enterprise Sysprep Error

    - by Anurag Shetti
     I am trying to sysprep my Windows 8.1 Enterprise (MSDN) installation and I get the errors below. I upgraded the machine from Windows 8 to Windows 8.1, and it contains the full configuration for VS 2012 and the rest of my tools.

     Exact error:
       Sysprep was not able to validate your Windows installation

     Error message lines in the log:
       C:\Users\André>err 0x8007139f
       # as an HRESULT: Severity: FAILURE (1), FACILITY_WIN32 (0x7), Code 0x139f
       # for hex 0x139f / decimal 5023
       ERROR_INVALID_STATE winerror.h
       # The group or resource is not in the correct state to
       # perform the requested operation.
       # 1 matches found for "0x8007139f"
       SYSPRP ActionPlatform::GetValue: Error from RegQueryValueEx on value SysprepMode under key HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup\Sysprep; dwRet = 0x2

     I searched HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup\Sysprep, but I couldn't find anything for SysprepMode; the value under the Sysprep key was (value not set).


  • ERR_INCOMPLETE_CHUNKED_ENCODING apache 2.4

    - by Bujanca Mihai
    I upgraded my Ubuntu server to 14.04 and Apache 2.4.7. Now my images don't load and console yields net::ERR_INCOMPLETE_CHUNKED_ENCODING. Also, I can sometimes see some of the images load for a little while (1 sec max) and then they disappear. .htaccess RewriteEngine On # Serve the favicon file from img folder RewriteCond %{REQUEST_URI} ^/favicon.ico$ RewriteRule ^(.*)$ /img/$1 [NC,L] # Redirect HTTP traffic to WWW subdomain RewriteCond %{HTTPS} off [NC] RewriteCond %{HTTP_HOST} !^www\. [NC] RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] # Redirect HTTPS traffic to WWW subdomain RewriteCond %{HTTPS} on [NC] RewriteCond %{HTTP_HOST} !^www\. [NC] RewriteRule ^(.*)$ https://www.%{HTTP_HOST}/$1 [R=301,L] # Auto Versioning rules RewriteCond %{REQUEST_FILENAME} !-s RewriteRule ^(.*)\.[\d]+\.(css|js)$ $1.$2 [L] # Default Zend rewrite rules RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] RewriteRule ^.*$ index.php [NC,L] VHost <VirtualHost *:80> ServerAdmin admin@localhost ServerName localhost DocumentRoot /home/mihai/ARTD/www/public/website # Omit this in production environment SetEnv APPLICATION_ENV local <Directory /home/mihai/ARTD/www/public/website > Options Indexes FollowSymLinks MultiViews AllowOverride All #Order deny,allow #Allow from all Require all granted </Directory> <IfModule mod_php5.c> php_value memory_limit 128M php_value upload_max_filesize 20M php_value post_max_size 20M </IfModule> ErrorLog /var/log/apache2/ARTD-error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/ARTD-access.log combined </VirtualHost> <IfModule mod_ssl.c> <VirtualHost *:443> ServerAdmin admin@localhost ServerName localhost DocumentRoot /home/mihai/ARTD/www/public/website # Omit this in production environment SetEnv APPLICATION_ENV local <Directory /home/mihai/ARTD/www/public/website > Options Indexes FollowSymLinks MultiViews AllowOverride All #Order deny,allow #Allow from all Require all granted </Directory> <IfModule mod_php5.c> php_value memory_limit 128M php_value upload_max_filesize 20M php_value post_max_size 20M </IfModule> ErrorLog /var/log/apache2/ARTD-ssl-error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/ARTD.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. 
#SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. # o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. 
#SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire #<FilesMatch "\.(cgi|shtml|phtml|php)$"> # SSLOptions +StdEnvVars #</FilesMatch> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. #BrowserMatch ".*MSIE.*" \ # nokeepalive ssl-unclean-shutdown \ # downgrade-1.0 force-response-1.0 </VirtualHost> </IfModule> logs Apache/2.4.7 (Ubuntu) PHP/5.5.9-1ubuntu4.3 OpenSSL/1.0.1f (internal dummy connection) 127.0.0.1 - - [25/Aug/2014:13:09:53 +0300] "GET /img/header/top-nav-separator.png HTTP/1.1" 200 462 "https://localhost/art" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.132 Safari/537.36"


  • Understanding how Tracert works

    - by iridescent
     From what I have gathered so far, Tracert works by sending 3 ICMP echo messages, starting with a TTL value of 1. Each router the packet encounters decrements the TTL. At the first router, 1 - 1 = 0, so an ICMP "time exceeded" message is sent back to the sending machine. The sender then increments the TTL to 2 and the cycle repeats for the second router (2 -> 1 -> 0), and so on. Please correct me if my understanding is flawed. I am curious why the ICMP "time exceeded" message isn't displayed by Tracert in Command Prompt, since it is in fact an error message? The cycle simply proceeds on. Thanks.
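     As a rough illustration of the mechanism described above, here is a minimal sketch of a TTL-based probe loop. It is only a sketch: it uses UDP probes in the Unix traceroute style (Windows Tracert sends ICMP echo requests instead, but the TTL mechanics are the same), it needs root privileges for the raw ICMP socket, and the target host name is made up. Each router that decrements the TTL to zero answers with ICMP "time exceeded", and the program reads that reply to learn the router's address; the final destination answers with "port unreachable" instead.

       import socket

       def traceroute(dest_name, max_hops=30, port=33434):
           dest_addr = socket.gethostbyname(dest_name)
           for ttl in range(1, max_hops + 1):
               # UDP socket for the outgoing probe, raw socket for the ICMP replies
               send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
               recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
               send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
               recv_sock.settimeout(2.0)
               recv_sock.bind(("", port))
               send_sock.sendto(b"", (dest_addr, port))
               try:
                   # The ICMP "time exceeded" reply arrives from the router that
                   # dropped the probe; its source address identifies the hop.
                   _, addr = recv_sock.recvfrom(512)
                   hop = addr[0]
               except socket.timeout:
                   hop = "*"
               finally:
                   send_sock.close()
                   recv_sock.close()
               print(ttl, hop)
               if hop == dest_addr:
                   break

       traceroute("example.com")   # hypothetical target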


  • Exchange Server 2007 Setup

    - by AlamedaDad
    Hi, I'm working on a upgrade to Exchange 2007 and I wanted to get some advise on hardware choices. We currently have an Exchange 2003 STD server with 400 users split between 6 AD Sites, that is housed on a single server. We need to move to a redundant, fault tolerant system to support our users. I'm planning on installing 2 Dell 1950 servers with W2k8-std to act as CAS and Hub servers, with NLB to allow abstraction of the actual server name to the users. There won't be an edge system since we have a Barracuda box already that will handle in/out spam/virus filtering. Backend I'm planning on 2 mailbox servers which will be Dell 2950s with 16GB RAM, 2 either dual-core or quad-core CPUs and 6 300GB SAS drives in some RAID config. These systems will be clustered using W2k8 Ent clustering and running CCR in Exchange. My questions are as follows: Is 16GB enough RAM for serving that many mailboxes along with the windows clustering and ccr? I'm trying to figure out disk layouts and I'm unsure of whether to use all local disk or some local and some SAN, via an OpenFiler iSCSI server. The SAN would be a Dell 2850 with 6 - 300GB SCSI drives and a PERC controller to slice as I want, with 8GB RAM. Option 1: 2 drives, RAID 1 - OS 2 drives, RAID 1 - Logs 2 drives, RAID 1 - Mail stores Option 2: 2 drives, RAID 1 - OS and logs 4 drives, RAID 5 - Mail Stores and scratch space for eseutil. Option 3: 2 drives, RAID 1 - OS 2 drives, RAID 1 - Logs 2 drives, RAID 0 - scratch space ~300GB iSCSI volume for mail stores Option 4: 2 drives, RAID 1 - OS 4 drives, RAID 5 - scratch space ~300GB iSCSI volume for mail stores ~300GB iSCSI volume for logs I have 2 sockets for CPUs and need to chose between dual and quad cores. The dual core have faster clocks but less cache and I'm thinking older architecture. Am I better off with more cores and cache while sacraficing clock speed? I am planning on adding the new E2K7 cluster to the E2K3 server and then move each mailbox over, all at once, then remove the old server. This seems more complicated than simply getting rid of the 2003 server and then adding the 2007 cluster and restoring the mailboxes using PowerControls or exmerge. The migration option lets me do this on my time, where a cutover means it all needs to work at once. If I go with the cutover method, how can I prebuild the servers and add them to the domain right after removing the 2003 server, or can't I? I think the answer is no and the migration is my only real option if I want to prebuild. I need to also migrate about 30GB of Public Folders. Is there anything special about this, other than specifying in the E2K7 install that I want older Outlook clients and PF's setup? I guess I could even keep the E2K3 server to host just the PFs? Lastly, if I have a mix of Outlook 200, 2003 and 2007 what do I need to do to make sure they all have access to the GAL and OAB? At time of cutover, we'll be at like 90% 2007, but we will have some older stuff around. My plan is to use Outlook Anywhere on laptops that are used outside the physical network. Are there any gotchas involved in that? I'm even thinking about using is for all Outlook clients, does anyone do that? The reason I'm considering it is that our WAN is really VPN tunnels over internet connections, so not a fully messhed, stable WAN. Thank you all very much for the assistance in advance and I look forward to discussion of these points! Regards...Michael


  • Forwarding udp ports iptables packets "lost"?

    - by Dindihi
    I have a Linux router (Debian 6.x) where i forward some ports to internal services. Some tcp ports (like 80, 22...) are OK. I have one Application listening on port 54277udp. No return is coming from this app, i only get Data on this port. Router: cat /proc/sys/net/ipv4/conf/all/rp_filter = 1 cat /proc/sys/net/ipv4/conf/eth0/forwarding = 1 cat /proc/sys/net/ipv4/conf/ppp0/forwarding = 1 $IPTABLES -t nat -I PREROUTING -p udp -i ppp0 --dport 54277 -j DNAT --to-destination $SRV_IP:54277 $IPTABLES -I FORWARD -p udp -d $SRV_IP --dport 54277 -j ACCEPT Also MASQUERADING internal traffic to ppp0(internet) is active & working. Default Policy INPUT&OUTPUT&FORWARD is DROP What is strange, when i do: tcpdump -p -vvvv -i ppp0 port 54277 I get a lot of traffic: 18:35:43.646133 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip..54277: [udp sum ok] UDP, length 29 18:35:43.652301 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip..54277: [udp sum ok] UDP, length 29 18:35:43.653324 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip..54277: [udp sum ok] UDP, length 29 18:35:43.655795 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip..54277: [udp sum ok] UDP, length 29 18:35:43.656727 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip..54277: [udp sum ok] UDP, length 29 18:35:43.659719 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip..54277: [udp sum ok] UDP, length 29 tcpdump -p -i eth0 port 54277 (on the same machine, the router) i get much less traffic. also on the destination $SRV_IP there are only a few packets coming in, but not all. INTERNAL SERVER: 19:15:30.039663 IP source.ip.52394 > 192.168.215.4.54277: UDP, length 16 19:15:30.276112 IP source.ip.52394 > 192.168.215.4.54277: UDP, length 16 19:15:30.726048 IP source.ip.52394 > 192.168.215.4.54277: UDP, length 16 So some udp ports are "ignored/dropped" ? Any idea what could be wrong? Edit: This is strange: The Forward rule has data packets, but the PREROUTING rule has 0 packets... iptables -nvL -t filter |grep 54277 Chain FORWARD (policy DROP 0 packets, 0 bytes) 168 8401 ACCEPT udp -- * * 0.0.0.0/0 192.168.215.4 state NEW,RELATED,ESTABLISHED udp dpt:54277 iptables -nvL -t nat |grep 54277 Chain PREROUTING (policy ACCEPT 405 packets, 24360 bytes) 0 0 DNAT udp -- ppp0 * 0.0.0.0/0 my.external.ip udp dpt:54277 state NEW,RELATED,ESTABLISHED to:192.168.215.4


  • How do I protect a low budget network from rogue DHCP servers?

    - by Kenned
    I am helping a friend manage a shared internet connection in an apartment buildling with 80 apartments - 8 stairways with 10 apartments in each. The network is laid out with the internet router at one end of the building, connected to a cheap non-managed 16 port switch in the first stairway where the first 10 apartments are also connected. One port is connected to another 16 port cheapo switch in the next stairway, where those 10 apartments are connected, and so forth. Sort of a daisy chain of switches, with 10 apartments as spokes on each "daisy". The building is a U-shape, approximately 50 x 50 meters, 20 meters high - so from the router to the farthest apartment it’s probably around 200 meters including up-and-down stairways. We have a fair bit of problems with people hooking up wifi-routers the wrong way, creating rogue DHCP servers which interrupt large groups of the users and we wish to solve this problem by making the network smarter (instead of doing a physical unplugging binary search). With my limited networking skills, I see two ways - DHCP-snooping or splitting the entire network into separate VLANS for each apartment. Separate VLANS gives each apartment their own private connection to the router, while DHCP snooping will still allow LAN gaming and file sharing. Will DHCP snooping work with this kind of network topology, or does that rely on the network being in a proper hub-and-spoke-configuration? I am not sure if there are different levels of DHCP snooping - say like expensive Cisco switches will do anything, but inexpensive ones like TP-Link, D-Link or Netgear will only do it in certain topologies? And will basic VLAN support be good enough for this topology? I guess even cheap managed switches can tag traffic from each port with it’s own VLAN tag, but when the next switch in the daisy chain receives the packet on it’s “downlink” port, wouldn’t it strip or replace the VLAN tag with it’s own trunk-tag (or whatever the name is for the backbone traffic). Money is tight, and I don’t think we can afford professional grade Cisco (I have been campaigning for this for years), so I’d love some advice on which solution has the best support on low-end network equipment and if there are some specific models that are recommended? For instance low-end HP switches or even budget brands like TP-Link, D-Link etc. If I have overlooked another way to solve this problem it is due to my lack of knowledge. :)


  • Oracle parameter array binding from c# executed parallel and serial on different servers

    - by redir_dev_nut
     I have two Oracle 9i 64-bit servers, dev and prod. When I call a procedure from a C# app with parameter array binding, prod executes the procedure simultaneously for each value in the parameter array, but dev executes it serially for each value. So, if the sproc does:

       select count(*) into cnt from mytable where id = 123;
       if cnt = 0 then
           insert into mytable (id) values (123);
       end if;

     and the table initially has no id = 123 row, dev gets cnt = 0 for the first array parameter value and then 1 for each subsequent one, while prod gets cnt = 0 for all array parameter values and inserts id 123 for each. Is this a configuration difference, an illusion due to a speed difference, or something else?
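     As a toy illustration (outside Oracle, with made-up values) of why that check-then-insert block behaves differently under serial versus simultaneous execution: when the calls run one at a time, each call sees the row inserted by the previous one, but when they run concurrently, every call can read cnt = 0 before any of them has inserted.

       import threading, time

       table = []                          # stands in for mytable

       def check_then_insert(value):
           # mimics: select count(*) into cnt ...; if cnt = 0 then insert ...
           cnt = table.count(value)
           time.sleep(0.01)                # the gap between the SELECT and the INSERT
           if cnt == 0:
               table.append(value)

       threads = [threading.Thread(target=check_then_insert, args=(123,)) for _ in range(5)]
       for t in threads: t.start()
       for t in threads: t.join()
       # Serial calls would leave a single 123 in the list; concurrent calls
       # typically leave several, one per caller that saw cnt == 0.
       print(table)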


  • SSIS DSN Not Showing as ODBC Data Source

    - by user1114330
     I have been following the directions here: http://social.msdn.microsoft.com/Forums/en-US/sqlintegrationservices/thread/05ccd778-b78c-4a83-a10a-c4ae412cc6e4 and ran into a problem where my System DSN is not showing up as an ODBC data source. I found this, which seemed promising: http://support.microsoft.com/kb/2000277. I was not able to delete the key, but I did what was suggested instead: "If unable to delete the key, double-click the key and erase the Data value entered. Once done, the value should read '(value not set)'." However, after following those instructions my System DSN still does not appear as an option. The User DSN does show up, but it does not work; I get a permissions error. Any ideas?


  • Making hosts accessible between LAN subnets

    - by nixnotwin
     I have two interfaces on my router with Tomato firmware: br0 and vlan4. br0 is on the 192.168.0.0/16 subnet and vlan4 is on the 10.0.1.0/24 subnet. As I don't want the various network services on br0 to be available on vlan4, I have added these firewall rules:

       iptables -I INPUT -i vlan4 -j ACCEPT
       iptables -I FORWARD -i vlan4 -o vlan2 -m state --state NEW -j ACCEPT
       iptables -I FORWARD -i br0 -o vlan4 -j DROP

     vlan2 is my WAN (internet access). The issue I want to solve: I want to make one host from the 192.168.0.0/16 network (br0), which has IP 192.168.0.50, available on vlan4 (10.0.1.0/24). Only that host should be reachable from vlan4 (all other hosts on br0 should remain inaccessible). What firewall rules can be used to do this?

     Edit 1: Output of iptables -nvL FORWARD:

       Chain FORWARD (policy DROP 4 packets, 204 bytes)
       pkts   bytes  target    prot opt in     out            source           destination
       0      0      ACCEPT    all  --  vlan4  192.168.0.50   0.0.0.0/0        0.0.0.0/0
       0      0      ACCEPT    all  --  vlan4  ppp0           0.0.0.0/0        0.0.0.0/0    state NEW
       229    13483  ACCEPT    all  --  vlan4  vlan2          0.0.0.0/0        0.0.0.0/0    state NEW
       0      0      DROP      all  --  br0    vlan3          0.0.0.0/0        0.0.0.0/0
       0      0      ACCEPT    all  --  vlan3  ppp0           0.0.0.0/0        0.0.0.0/0    state NEW
       67     3405   ACCEPT    all  --  vlan3  vlan2          0.0.0.0/0        0.0.0.0/0    state NEW
       0      0      ACCEPT    all  --  br0    br0            0.0.0.0/0        0.0.0.0/0
       34     1360   DROP      all  --  *      *              0.0.0.0/0        0.0.0.0/0    state INVALID
       758    40580  TCPMSS    tcp  --  *      *              0.0.0.0/0        0.0.0.0/0    tcp flags:0x06/0x02 TCPMSS clamp to PMTU
       11781  2111K  restrict  all  --  *      vlan2          0.0.0.0/0        0.0.0.0/0
       26837  19M    ACCEPT    all  --  *      *              0.0.0.0/0        0.0.0.0/0    state RELATED,ESTABLISHED
       0      0      wanin     all  --  vlan2  *              0.0.0.0/0        0.0.0.0/0
       287    15927  wanout    all  --  *      vlan2          0.0.0.0/0        0.0.0.0/0
       283    15723  ACCEPT    all  --  br0    *              0.0.0.0/0        0.0.0.0/0
       0      0      upnp      all  --  vlan2  *              0.0.0.0/0        0.0.0.0/0

     Output of iptables -t nat -nvL PREROUTING:

       Chain PREROUTING (policy ACCEPT 6887 packets, 526K bytes)
       pkts   bytes  target         prot opt in     out  source            destination
       855    83626  WANPREROUTING  all  --  *      *    0.0.0.0/0         222.228.137.223
       0      0      DROP           all  --  vlan2  *    0.0.0.0/0         192.168.0.0/16
       0      0      DNAT           udp  --  *      *    192.168.0.0/16    !192.168.0.0/16   udp dpt:53 to:192.168.0.1


  • PHP-APC is exceeding limit apc.shm_size often

    - by user142741
     I am implementing APC on a shared server that currently hosts 1000 sites (WordPress, Moodle, etc.). Looking at the APC admin page, I see that "Cache full count" is growing rapidly. I've tried increasing the value of apc.shm_size, reducing apc.ttl, and increasing apc.shm_segments, but I cannot resolve this issue. What am I doing wrong? Here is some information:

     apc.ini:
       extension=apc.so
       apc.shm_size = 256
       apc.enabled = 1
       apc.ttl = 300
       apc.user_ttl = 300

     Ubuntu: 12.04
     PHP: 5.3.10
     APC: 3.1.7
     Server has 16 GB of memory
     Shared memory limit: 256 MB


  • Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table

    - by Imagineer
     I'm getting the above-mentioned error when backing up with ZRM, which uses mysqldump for the backup:

       mysqldump --opt --extended-insert --single-transaction --create-options --default-character-set=utf8 --user=" " -p --all-databases "/nfs/backup/mysql01/dailyrun/20091216043001/backup.sql"
       mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table TICKET_ATTACHMENT at row: 2286

     I have increased 'max_allowed_packet' to 1G in /etc/my.cnf, which is the server-side setting, and for the client side I've set it by running this command:

       mysql -u -p --max_allowed_packet=1G

     I have verified that the client and server sides have the same value. This checks the client-side value, according to this forum posting (http://forums.mysql.com/read.php?35,75794,261640):

       mysql> SELECT @@MAX_ALLOWED_PACKET;
       +----------------------+
       | @@MAX_ALLOWED_PACKET |
       +----------------------+
       |           1073741824 |
       +----------------------+
       1 row in set (0.00 sec)

     And this checks the server-side setting:

       mysql> SHOW VARIABLES;
       | max_allowed_packet | 1073741824 |

     I have run out of ideas, and searching Experts Exchange and Google for solutions hasn't helped so far. Reference: http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html. Any advice would be appreciated, thank you.


  • best practice with memcache/php - multi memcache nodes

    - by user62835
     I am working on a web app that has to be built for scalability. It stores the results of frequent MySQL queries in the cache. I have pretty much everything built and ready to go, but I am concerned about best practices for deciding where to cache the data. I've talked to a few people, and one of them suggested splitting each key/value across all the memcache nodes, meaning that if I store, for example, 'somekey' => 'this is the value', it would be split across, say, 3 memcache servers. Is that a better way? Or is memcache built more around a 1-to-1 relationship, for example storing the value on server A until it faults out, then going to server B and storing it there? That is my current understanding from the research I have done and past experience working with memcache. Could someone please point me in the right direction and let me know which way is best, or whether I have this completely mixed up? Thanks
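     For what it is worth, the usual memcached client model is neither of the two descriptions above: the client library hashes each key to pick exactly one server from the pool, so keys are spread across all nodes, but any single key/value lives on only one node (it is not split across servers, and it is not written to server A until it faults and then moved to server B). A minimal sketch of that idea, with made-up server names:

       import hashlib

       servers = ["cache1:11211", "cache2:11211", "cache3:11211"]   # hypothetical pool

       def server_for(key):
           # Hash the key and map it onto one node. Real clients often use
           # consistent hashing instead, so that adding or removing a node
           # only remaps a fraction of the keys.
           digest = hashlib.md5(key.encode("utf-8")).hexdigest()
           return servers[int(digest, 16) % len(servers)]

       print(server_for("somekey"))        # always the same node for this key
       print(server_for("someotherkey"))   # possibly a different node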


  • How do you get SharePoint back in sync when you change a user's sAMAccountName?

    - by Kirk Liemohn
    I have observed on SharePoint 2010 that if you change the sAMAccountName of a user after the user has logged into a SharePoint site collection, the tp_Login field in the UserInfo table does not get updated. It still has the old user Id. While the user can log into SharePoint under the new account, these new logins do not update the table. I have code that looks at the SPUser.LoginName and this value appears to be the tp_Login field value which is now old. The fact that this value is old causes my code to fail. I suspect this behavior is identical in SharePoint 2007. Is there any way to force SharePoint to recognize the new sAMAccountName? I suspect that profile synchronization might help, but I would like for my solution to work with WSS 3.0 and SharePoint 2010 Foundation. I considered manually updating the database table, but I would like to stick with supported approaches.


  • Centos does not open port/s after the rule/s are appended

    - by Charlie Dyason
    So after some battling and struggling with the firewall, i see that I may be doing something or the firewall isnt responding correctly there is has a port filter that is blocking certain ports. by the way, I have combed the internet, posted on forums, done almost everything and now hence the website name "serverfault", is my last resort, I need help What I hoped to achieve is create a pptp server to connect to with windows/linux clients UPDATED @ bottom Okay, here is what I did: I made some changes to my iptables file, giving me endless issues and so I restored the iptables.old file contents of iptables.old: # Firewall configuration written by system-config-firewall # Manual customization of this file is not recommended. *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT after iptables.old restore(back to stock), nmap scan shows: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 13:54 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.014s latency). Not shown: 997 filtered ports PORT STATE SERVICE 22/tcp open ssh 113/tcp closed ident 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 4.95 seconds if I append rule: (to accept all tcp ports incoming to server on interface eth0) iptables -A INPUT -i eth0 -m tcp -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 13:58 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.017s latency). Not shown: 858 filtered ports, 139 closed ports PORT STATE SERVICE 22/tcp open ssh 443/tcp open https 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 3.77 seconds *notice it allows and opens port 443 but no other ports, and it removes port 113...? removing previous rule and if I append rule: (allow and open port 80 incoming to server on interface eth0) iptables -A INPUT -i eth0 -m tcp -p tcp --dport 80 -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 14:01 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.014s latency). Not shown: 996 filtered ports PORT STATE SERVICE 22/tcp open ssh 80/tcp closed http 113/tcp closed ident 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 5.12 seconds *notice it removes port 443 and allows 80 but is closed without removing previous rule and if I append rule: (allow and open port 1723 incoming to server on interface eth0) iptables -A INPUT -i eth0 -m tcp -p tcp --dport 1723 -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 14:05 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.015s latency). Not shown: 996 filtered ports PORT STATE SERVICE 22/tcp open ssh 80/tcp closed http 113/tcp closed ident 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 5.16 seconds *notice no change in ports opened or closed??? after removing rules: iptables -A INPUT -i eth0 -m tcp -p tcp --dport 80 -j ACCEPT iptables -A INPUT -i eth0 -m tcp -p tcp --dport 1723 -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 14:07 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.015s latency). 
Not shown: 998 filtered ports PORT STATE SERVICE 22/tcp open ssh 113/tcp closed ident Nmap done: 1 IP address (1 host up) scanned in 5.15 seconds and returning rule: (to accept all tcp ports incoming to server on interface eth0) iptables -A INPUT -i eth0 -m tcp -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 14:07 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.017s latency). Not shown: 858 filtered ports, 139 closed ports PORT STATE SERVICE 22/tcp open ssh 443/tcp open https 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 3.87 seconds notice the eth0 changes the 999 filtered ports to 858 filtered ports, 139 closed ports QUESTION: why cant I allow and/or open a specific port, eg. I want to allow and open port 443, it doesnt allow it, or even 1723 for pptp, why am I not able to??? sorry for the layout, the editor was give issues (aswell... sigh) UPDATE @Madhatter comment #1 thank you madhatter in my iptables file: # Firewall configuration written by system-config-firewall # Manual customization of this file is not recommended. *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i eth0 -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT # ----------all rules mentioned in post where added here ONLY!!!---------- -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT if I want to allow and open port 1723 (or edit iptables to allow a pptp connection from remote pc), what changes would I make? (please bear with me, my first time working with servers, etc.) Update MadHatter comment #2 iptables -L -n -v --line-numbers Chain INPUT (policy ACCEPT 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 9 660 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 3 0 0 ACCEPT all -- eth0 * 0.0.0.0/0 0.0.0.0/0 4 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 5 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 6 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited Chain OUTPUT (policy ACCEPT 6 packets, 840 bytes) num pkts bytes target prot opt in out source destination just on a personal note, madhatter, thank you for the support , I really appreciate it! 
UPDATE MadHatter comment #3 here are the interfaces ifconfig eth0 Link encap:Ethernet HWaddr 00:1D:D8:B7:1F:DC inet addr:[server ip] Bcast:[server ip x.x.x].255 Mask:255.255.255.0 inet6 addr: fe80::21d:d8ff:feb7:1fdc/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:36692 errors:0 dropped:0 overruns:0 frame:0 TX packets:4247 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2830372 (2.6 MiB) TX bytes:427976 (417.9 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) remote nmap nmap -p 1723 [server ip] Starting Nmap 6.00 ( http://nmap.org ) at 2013-11-01 16:17 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.017s latency). PORT STATE SERVICE 1723/tcp filtered pptp Nmap done: 1 IP address (1 host up) scanned in 0.51 seconds local nmap nmap -p 1723 localhost Starting Nmap 5.51 ( http://nmap.org ) at 2013-11-01 16:19 SAST Nmap scan report for localhost (127.0.0.1) Host is up (0.000058s latency). Other addresses for localhost (not scanned): 127.0.0.1 PORT STATE SERVICE 1723/tcp open pptp Nmap done: 1 IP address (1 host up) scanned in 0.11 seconds UPDATE MadHatter COMMENT POST #4 I apologize, if there might have been any confusion, i did have the rule appended: (only after 3rd post) iptables -A INPUT -p tcp --dport 1723 -j ACCEPT netstat -apn|grep -w 1723 tcp 0 0 0.0.0.0:1723 0.0.0.0:* LISTEN 1142/pptpd There are not VPN's and firewalls between the server and "me" UPDATE MadHatter comment #5 So here is an intersting turn of events: I booted into windows 7, created a vpn connection, went through the verfication username & pword - checking the sstp then checking pptp (went through that very quickly which meeans there is no problem), but on teh verfication of username and pword (before registering pc on network), it got stuck, gave this error Connection failed with error 2147943625 The remote computer refused the network connection netstat -apn | grep -w 1723 before connecting: netstat -apn |grep -w 1723 tcp 0 0 0.0.0.0:1723 0.0.0.0:* LISTEN 1137/pptpd after the error came tried again: netstat -apn |grep -w 1723 tcp 0 0 0.0.0.0:1723 0.0.0.0:* LISTEN 1137/pptpd tcp 0 0 41.185.26.238:1723 41.13.212.47:49607 TIME_WAIT - I do not know what it means but seems like there is progress..., any thoughts???


  • What does the "Maximum Frequency" number mean in the Windows Resource Monitor?

    - by nhinkle
    In the Windows Resource Monitor's CPU tab, there is a status box and graph for the "Maximum Frequency", right next to the "CPU Usage" values. What does this mean? The value is sometimes over 100% on my system... what could that imply? By looking at CPU-z's real-time report of the processor's clock speed, it seems to be loosely related to what frequency the CPU is running at, which would imply that it means "percent of maximum possible frequency the CPU is running at"; this would be of relevance on systems with SpeedStep and/or TurboBoost technology (or similar). Furthermore, setting the system to "power saving mode" lowers the "maximum frequency" value to around 60%, while setting it to "high performance" mode sets it to around 110%. However, the percentage does not seem to exactly correlate to the CPU speed being shown. What value is this actually representing then?
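     To illustrate with made-up numbers: on a CPU whose rated base clock is 2.5 GHz, SpeedStep throttling it to 1.5 GHz would show roughly 1.5 / 2.5 = 60%, while Turbo Boost raising it to 2.75 GHz would show about 2.75 / 2.5 = 110%, which would line up with the power-saving and high-performance observations described above.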


  • DNS Problems (NIGHTMARES!) with BIND and Virtualmin

    - by Nyxynyx
    I have a webserver (Ubuntu 12.04 with LAMP) using Virtualmin / Webmin. Because I just moved from a Cpanel system, I am having a nightmare configuring the DNS! Using intoDNS.com, the failed reports are: Mismatched NS records WARNING: One or more of your nameservers did not return any of your NS records. DNS servers responded ERROR: One or more of your nameservers did not respond: The ones that did not respond are: 123.123.123.123 213.251.188.141x Multiple Nameservers ERROR: Looks like you have less than 2 nameservers. According to RFC2182 section 5 you must have at least 3 nameservers, and no more than 7. Having 2 nameservers is also ok by me. Missing nameservers reported by your nameserver You should already know that your NS records at your nameservers are missing, so here it is again: ns1.mydomain.com. sdns2.ovh.net. SOA record No valid SOA record came back! MX Records WWW A Record ERROR: I could not get any A records for www.mydomain.com! Step-by-Step of my Attempt In my domain registrar (Namecheap), I registered ns1.mydomain.com as a nameserver, pointing to the IP address of my web server which is running bind9. The domain is setup with DNS ns1.mydomain.com and sdns2.ovh.net. sdns2.ovh.net is a secondary DNS server (SLAVE and pointing mydomain.com to the IP address of my web server) Webserver domain: mydomain.com Webserver hostname: ns4000000.ip-123-123-123.net Webserver IP: 123.123.123.123 Under Virtualmin, I edited the default Virtual server template, BIND DNS records for new domains: ns1.mydomain.com Master DNS server hostname: ns1.mydomain.com Next I created a Virtual server using that server template. This is what I've done but its still not working! Any ideas? I've been stuck for days, thank you for all your help! service bind9 status * bind9 is running lsof -i :53 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME named 6966 bind 20u IPv6 338583 0t0 TCP *:domain (LISTEN) named 6966 bind 21u IPv4 338588 0t0 TCP localhost.localdomain:domain (LISTEN) named 6966 bind 22u IPv4 338590 0t0 TCP ns4000000.ip-123-123-123.net:domain (LISTEN) named 6966 bind 512u IPv6 338582 0t0 UDP *:domain named 6966 bind 513u IPv4 338587 0t0 UDP localhost.localdomain:domain named 6966 bind 514u IPv4 338589 0t0 UDP ns4000000.ip-123-123-123.net:domain /etc/resolv.con (Not sure how 213.186.33.99 got here) nameserver 127.0.0.1 nameserver 213.186.33.99 search ovh.net host 123.123.123.123 (my web server's IP) 13.60.245.198.in-addr.arpa domain name pointer ns4000000.ip-123-123-123.net. nslookup 213.186.33.99 Server: 127.0.0.1 Address: 127.0.0.1#53 Non-authoritative answer: 99.33.186.213.in-addr.arpa name = cdns.ovh.net. Authoritative answers can be found from: 33.186.213.in-addr.arpa nameserver = ns.ovh.net. 33.186.213.in-addr.arpa nameserver = dns.ovh.net. nslookup ns1.mydomain.com ;; Got SERVFAIL reply from 127.0.0.1, trying next server ;; connection timed out; no servers could be reached nslookup ns2.mydomain.com ;; Got SERVFAIL reply from 127.0.0.1, trying next server ;; connection timed out; no servers could be reached nslookup www.mydomain.com ;; Got SERVFAIL reply from 127.0.0.1, trying next server ;; connection timed out; no servers could be reached dig mydomain.com ; <<>> DiG 9.8.1-P1 <<>> mydomain.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 43540 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;mydomain.com. 
IN A ;; Query time: 0 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Oct 11 11:30:09 2012 ;; MSG SIZE rcvd: 30 dig ns1.mydomain.com ; <<>> DiG 9.8.1-P1 <<>> ns1.mydomain.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 31254 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.mydomain.com. IN A ;; Query time: 0 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Oct 11 11:30:16 2012 ;; MSG SIZE rcvd: 34 /etc/bind/named.conf include "/etc/bind/named.conf.options"; include "/etc/bind/named.conf.local"; include "/etc/bind/named.conf.default-zones"; /etc/bind/named.conf.default-zones zone "." { type hint; file "/etc/bind/db.root"; }; zone "localhost" { type master; file "/etc/bind/db.local"; }; zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; }; zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; }; zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; }; /etc/bind/named.conf.local zone "mydomain.com" { type master; file "/var/lib/bind/mydomain.com.hosts"; allow-transfer { 127.0.0.1; localnets; }; }; /etc/bind/named.conf.options options { directory "/var/cache/bind"; dnssec-validation auto; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; // allow-recursion { 127.0.0.1; }; // transfer-source; }; named-checkconf -z dns_master_load: /var/lib/bind/mydomain.com.hosts:21: unexpected end of line dns_master_load: /var/lib/bind/mydomain.com.hosts:20: unexpected end of input /var/lib/bind/mydomain.com.hosts: file does not end with newline zone mydomain.com/IN: loading from master file /var/lib/bind/mydomain.com.hosts failed: unexpected end of input zone mydomain.com/IN: not loaded due to errors. _default/mydomain.com/IN: unexpected end of input zone localhost/IN: loaded serial 2 zone 127.in-addr.arpa/IN: loaded serial 1 zone 0.in-addr.arpa/IN: loaded serial 1 zone 255.in-addr.arpa/IN: loaded serial 1 iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT udp -- anywhere anywhere udp dpt:domain ACCEPT tcp -- anywhere anywhere tcp dpt:20000 ACCEPT tcp -- anywhere anywhere tcp dpt:webmin ACCEPT tcp -- anywhere anywhere tcp dpt:https ACCEPT tcp -- anywhere anywhere tcp dpt:http ACCEPT tcp -- anywhere anywhere tcp dpt:imaps ACCEPT tcp -- anywhere anywhere tcp dpt:imap2 ACCEPT tcp -- anywhere anywhere tcp dpt:pop3s ACCEPT tcp -- anywhere anywhere tcp dpt:pop3 ACCEPT tcp -- anywhere anywhere tcp dpt:ftp-data ACCEPT tcp -- anywhere anywhere tcp dpt:ftp ACCEPT tcp -- anywhere anywhere tcp dpt:domain ACCEPT tcp -- anywhere anywhere tcp dpt:submission ACCEPT tcp -- anywhere anywhere tcp dpt:smtp ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ACCEPT all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination

    Read the article

  • Error attempting to log into Redmine through IIS 7.5 Reverse Proxy

    - by dneaster3
    I am trying to set up Redmine as a subdirectory of our department's intranet site, and also to rebrand it as "Workflow" using IIS's URL Rewrite extension. I have it "working" in that it will serve the page with all the correct rewrites in both the URL and the HTML code. However, when I try to submit a form (including logging in to Redmine), IIS gives me one of the following errors:

    Your browser sent a request that this server could not understand.

    or

    The specified CGI application encountered an error and the server terminated the process.

    Here's the setup: Redmine installed on a local Windows XP machine using the Bitnami all-in-one installer, which includes Apache 2, Ruby-on-Rails, MySQL, Redmine and Thin. Redmine runs locally at http://localhost/redmine and over the intranet at http://146.18.236.xxx/redmine. Windows Server + IIS 7.5 serving up an ASP.NET intranet web application, mydept.mycompany.com, with the IIS extensions URL Rewrite and ARR installed. Reverse proxy settings for IIS (shown below) to serve Redmine at mydept.mycompany.com/workflow:

    <rewrite>
      <rules>
        <rule name="Route requests for workflow to redmine server" stopProcessing="true">
          <match url="^workflow/?(.*)" />
          <conditions>
            <add input="{CACHE_URL}" pattern="^(https?)://" />
          </conditions>
          <action type="Rewrite" url="{C:1}://146.18.236.xxx/redmine/{R:1}" logRewrittenUrl="true" />
          <serverVariables>
            <set name="HTTP_ACCEPT_ENCODING" value="" />
            <set name="ORIGINAL_HOST" value="{HTTP_HOST}" />
          </serverVariables>
        </rule>
      </rules>
      <outboundRules rewriteBeforeCache="true">
        <clear />
        <preConditions>
          <preCondition name="isHTML" logicalGrouping="MatchAny">
            <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
            <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/plain" />
            <add input="{RESPONSE_CONTENT_TYPE}" pattern="^application/.*xml" />
          </preCondition>
          <preCondition name="isRedirection">
            <add input="{RESPONSE_STATUS}" pattern="3\d\d" />
          </preCondition>
        </preConditions>
        <rule name="Rewrite outbound relative URLs in tags" preCondition="isHTML">
          <match filterByTags="A, Area, Base, Form, Frame, Head, IFrame, Img, Input, Link, Script" pattern="^/redmine/(.*)" />
          <action type="Rewrite" value="/workflow/{R:1}" />
        </rule>
        <rule name="Rewrite outbound absolute URLs in tags" preCondition="isHTML">
          <match filterByTags="A, Area, Base, Form, Frame, Head, IFrame, Img, Input, Link, Script" pattern="^(https?)://146.18.236.xxx/redmine/(.*)" />
          <action type="Rewrite" value="{R:1}://mydept.mycompany.com/workflow/{R:2}" />
        </rule>
        <rule name="Rewrite tags with hypenated properties missed by IIS bug" preCondition="isHTML">
          <!-- http://forums.iis.net/t/1200916.aspx -->
          <match filterByTags="None" customTags="" pattern="(\baction=&quot;|\bsrc=&quot;|\bhref=&quot;)/redmine/(.*?)(&quot;)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="true" />
          <action type="Rewrite" value="{R:1}/workflow/{R:2}{R:3}" />
        </rule>
        <rule name="Rewrite Location Header" preCondition="isRedirection">
          <match serverVariable="RESPONSE_LOCATION" pattern="^http://[^/]+/(.*)" />
          <conditions>
            <add input="{ORIGINAL_URL}" pattern=".+" />
            <add input="{URL}" pattern="^/(workflow|redmine)/.*" />
          </conditions>
          <action type="Rewrite" value="http://{ORIGINAL_URL}/{C:1}/{R:1}" />
        </rule>
      </outboundRules>
    </rewrite>
    <urlCompression dynamicCompressionBeforeCache="false" />

    Any help that you can provide would be appreciated. I get the impression that I'm close and that it is just one little setting here or there, but I can't seem to make it work.
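    One prerequisite worth noting for rules like the inbound rule above that set server variables: URL Rewrite only lets a rule set a server variable that has been whitelisted at the server level. A minimal sketch of that fragment is below; the variable names are taken from the rules above, and placing it in applicationHost.config (or adding the names via the URL Rewrite "View Server Variables" UI) is the usual approach:

    <rewrite>
      <!-- sketch: whitelist the variables the inbound rule sets -->
      <allowedServerVariables>
        <add name="HTTP_ACCEPT_ENCODING" />
        <add name="ORIGINAL_HOST" />
      </allowedServerVariables>
    </rewrite>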

    Read the article

  • About crushers for stone and building

    - by sbmxuancao1221
    SBM has formed a complete production chain whose main products are crushing machinery and grinding machinery, with auxiliary products such as vibrating screens, vibrating feeders and other associated equipment. The products cover more than 20 models across 3 major series: the mill series, the crushing series, and the sand-making series.

    Read the article

  • How to input 64-bit hex values in Octave

    - by Chris Ashton
    I'm trying to use Octave as a programmer's calculator. I want to input a 64-bit pointer, but when I do, the 64-bit value apparently gets silently truncated to 32 bits:

    octave:44> base_ptr=0x1010101020202020
    base_ptr = 538976288
    octave:45> uint64(base_ptr)
    ans = 538976288
    octave:46> printf("%lx\n", base_ptr)
    20202020

    So it seems the input value has been truncated to the low 32 bits. I would use scanf, but the docs say it should only be used internally. How can I input the full 64-bit value? Alternatively, is there some awesome free programmer's calculator out there for Windows? (I know the Windows calculator has a programmer's mode, but I would like arbitrary variable support.) I tried using my TI-89, but it also doesn't support 64-bit hex.
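    One possible workaround, sketched under the assumption that the hex literal is being truncated when parsed: build the value from two 32-bit halves so all 64 bits survive. (Whether printf or dec2hex preserves full 64-bit precision for uint64 values depends on the Octave version.)

    % sketch: assemble 0x1010101020202020 from two 32-bit halves
    hi = uint64(hex2dec("10101010"));        % high 32 bits
    lo = uint64(hex2dec("20202020"));        % low 32 bits
    base_ptr = bitor(bitshift(hi, 32), lo);  % full 64-bit value
    dec2hex(base_ptr)                        % may round through double on older Octave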

    Read the article

  • Referring to another field in a form and returning entries based on that field

    - by Claus Machholdt
    The structure of the DB is:

    Org_Year (table): ID, Org_Name_ID, Org_Year
    Ft (table): ID, Org_Year_ID, Count
    Org_Name (table): ID, Org_Name

    I've created a form to input data into Ft. The form has a reference to Org_Name: I should be able to choose an org first, and afterwards choose which year to enter data into Ft for. I only want to be presented with the years that exist in the Org_Year table for the org selected in the dropdown above. The query to populate the select box (Org_year_Box) is: SELECT Org_Year FROM Org_Year WHERE Org_Name_ID=Organisation_Name_ID.value; It doesn't return the years for the given org ID when using this query, but if I replace "Organisation_Name_ID.value" with the actual value, i.e. "2", it returns the correct years. How should this be done?
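    Assuming this is an Access-style form (the form name frmFt below is an assumption), the usual pattern is to reference the control on the open form in the combo's row source and then requery the year combo whenever the organisation changes:

    -- sketch: row source for Org_year_Box; frmFt is an assumed form name
    SELECT Org_Year
    FROM Org_Year
    WHERE Org_Name_ID = [Forms]![frmFt]![Organisation_Name_ID]
    ORDER BY Org_Year;

    Pairing this with Me.Org_year_Box.Requery in the AfterUpdate event of the organisation combo keeps the year list in sync with the current selection.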

    Read the article

  • In Nginx, can I handle either a Location: url or a Content-Type: text/html response from memcached?

    - by Sean Foo
    I'm setting up an nginx/apache reverse proxy where nginx handles the static files and apache the dynamic ones. I have a search engine, and depending on the search parameter I either forward the user directly to the page they are looking for or provide a set of search results. I cache these results in memcached as:

    key: /search.cgi?q=foo  value: LOCATION:http://www.example.com/foo.html
    key: /search.cgi?q=bar  value: CONTENT-TYPE: text/html <html> .... .... </html>

    I can pull the "Content-Type ..." values out of memcached using nginx and send them to the user, but I can't quite figure out how to handle a returned value like "Location ...". Can I?
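    A minimal sketch of the memcached side is below, assuming memcached runs locally on 11211 and Apache listens on 8080. memcached_pass returns the stored bytes verbatim, so it works for the cached HTML bodies, but a stored "LOCATION:..." value would still need separate handling (for example an embedded Lua/Perl filter, or keeping redirects under different keys and letting the backend issue them):

    # sketch: serve cached HTML from memcached, fall back to Apache otherwise
    location /search.cgi {
        set $memcached_key $request_uri;   # matches the "/search.cgi?q=..." keys
        memcached_pass 127.0.0.1:11211;
        default_type text/html;
        error_page 404 502 504 = @apache;  # cache miss or memcached down
    }

    location @apache {
        proxy_pass http://127.0.0.1:8080;  # assumed Apache backend
    }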

    Read the article

< Previous Page | 386 387 388 389 390 391 392 393 394 395 396 397  | Next Page >