Search Results

Search found 16772 results on 671 pages for 'charles long'.


  • Doesn't Matlab optimize the following?

    - by kloop
    I have a very long 1×r vector v, a very long 1×s vector w, and an r×s matrix A that is sparse (but very large in dimensions). I was expecting Matlab to optimize the following so I wouldn't run into memory trouble: A./(v'*w). But it seems Matlab actually generates the full v'*w matrix, because I am running into an Out of memory error. Is there a way to overcome this? Note that there is no need to calculate all of v'*w, because many values of A are 0. EDIT: If it were possible, one way to do it would be A(find(A)) ./ (v'*w)(find(A)); but you can't index into a subexpression (v'*w in this case) without first calculating it and storing it in a variable.
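    A minimal sketch of one workaround, using the element-wise identity that entry (i,j) of v'*w equals v(i)*w(j): extract the nonzero entries of A with find, divide only those, and rebuild a sparse result.

        [i, j, a] = find(A);              % row/col indices and values of the nonzeros
        vals = a ./ (v(i).' .* w(j).');   % v(i)*w(j) computed only where A ~= 0
        B = sparse(i, j, vals, size(A,1), size(A,2));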

    Read the article

  • JPA Native Query (SQL View)

    - by Uchenna
    I have two entities, Customer and Account.

        @Entity @Table(name="customer")
        public class Customer {
            private Long id;
            private String name;
            private String accountType;
            private String accountName;
            ...
        }

        @Entity @Table(name="account")
        public class Account {
            private Long id;
            private String accountName;
            private String accountType;
            ...
        }

    I have an SQL query:

        select a.id as account_id, a.account_name, a.account_type, d.id, d.name
        from account a, customer d

    Assumption: the account and customer tables are created during application startup, and the accountType and accountName fields of the Customer entity should not be created as columns; that is, only the id and name columns will be created. Question: how do I run the above SQL query and return a Customer entity object with its accountType and accountName properties populated with the query's account_name and account_type values? Thanks
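    One possible approach (a sketch, assuming a JPA 2.1 provider is available): mark the extra Customer fields @Transient so no columns are created, and map the native query through a @SqlResultSetMapping with a ConstructorResult so the provider populates them. Customer would need a matching constructor Customer(Long, String, String, String), and em is the EntityManager.

        @SqlResultSetMapping(
            name = "CustomerWithAccount",
            classes = @ConstructorResult(
                targetClass = Customer.class,
                columns = {
                    @ColumnResult(name = "id", type = Long.class),
                    @ColumnResult(name = "name"),
                    @ColumnResult(name = "account_type"),
                    @ColumnResult(name = "account_name")
                }))

        List<Customer> customers = em.createNativeQuery(
                "select d.id, d.name, a.account_type, a.account_name "
              + "from account a, customer d", "CustomerWithAccount")
            .getResultList();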

    Read the article

  • What do you wish you could've learned sooner?

    - by Industrial
    What things, methods, workflows, etc. can you not live without today and wish you had learned of a long time ago? For example, learning some basic Ubuntu and using my debugger properly in the IDE have made a huge difference to me, and they are probably the two things I most wish I had done a long time ago. Using a debugger just seems like common sense now to many of us, but to those at an early stage of their career it might not. (I'm a good example of that.)

    Read the article

  • Creating a multi-row "table" as part of a SELECT

    - by Chad Birch
    I'm not really sure how to describe my question (thus the awful title), but it's related to this recent question. The problem would be easily solved if there were some way for me to create a "table" with 4 rows as part of my SELECT (to use with NOT IN or MINUS). What I mean is, I can do this:

        SELECT 1, 2, 3, 4;

    and will receive one row from the database:

        | 1 | 2 | 3 | 4 |

    But is there any way to receive the following instead (without using UNION; I don't really want a query that's potentially thousands of lines long for a long list)?

        | 1 |
        | 2 |
        | 3 |
        | 4 |
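    A minimal sketch of one option, assuming a DBMS that supports the VALUES row constructor in FROM (e.g. PostgreSQL or SQL Server 2008+; the table and column names here are hypothetical):

        SELECT t.id
        FROM (VALUES (1), (2), (3), (4)) AS t(id)
        WHERE t.id NOT IN (SELECT some_id FROM some_table);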

    Read the article

  • Changing CCK content-types details results in numerous DB calls for the menu system

    - by Paul Strugger
    Every time I make a change to the details of a content type, it takes too long. I thought it had to do with the fact that I have too many content types and fields (~500), but when I loaded the devel module to see which queries take that long, I saw:

        Executed 32212 queries in 12267.57 milliseconds. Queries taking longer
        than 5 ms and queries executed more than once, are highlighted.
        Page execution time was 55763.32 ms

    Looking at the details, I notice that the vast majority of DB calls come from the menu system, e.g.:

        _menu_route
        menu_local_tasks
        admin_menu_link_save

    Why is that? Can I avoid some of these calls? It doesn't seem logical!

    Read the article

  • Problem with Initializing Consts

    - by UdiM
    This code, when compiled with xlC 8.0 (on AIX 6.1), produces the wrong result: it should print 12345 but instead prints 804399880. Removing the const in front of result makes the code work correctly. Where is the bug?

        #include <stdio.h>
        #include <stdlib.h>
        #include <string>

        long int foo(std::string input)
        {
            return strtol(input.c_str(), NULL, 0);
        }

        void bar()
        {
            const long int result = foo("12345");
            printf("%u\n", result);
        }

        int main()
        {
            bar();
            return 0;
        }

    Compilation command: /usr/vacpp/bin/xlC example.cpp -g
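    A likely culprit (an observation from the snippet, not anything xlC-specific): "%u" expects an unsigned int, but result is a long int, so the printf call has undefined behaviour; whether it happens to print the right value can change with const, optimization level, or calling convention. The minimal fix is the matching conversion specifier:

        printf("%ld\n", result);   /* %ld matches long int */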

    Read the article

  • Methods for breaking up content client-side into multiple "pages"

    - by Tom Genoni
    I have a long, text-only HTML article formatted with paragraph tags. What I'd like to do is break this content into N divs so that I can create individual pages. For instance, on an iPad/iPhone, instead of reading one long page the user could swipe right/left to navigate between pages. My initial JavaScript attempts have been somewhat convoluted: creating an array of the text, measuring line heights and device window heights, and adding closing/opening paragraph tags at the end/beginning of pages. Thoughts on a good way to approach this? I will not have access to server-side processing; this has to be a client-side solution.
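    One approach worth sketching (an assumption, not a tested recipe): let CSS multi-column layout do the splitting, with each "page" being one viewport-wide column, then translate the container horizontally on swipe. The element ID is hypothetical.

        var article = document.getElementById('article');
        article.style.height = window.innerHeight + 'px';
        article.style.columnWidth = window.innerWidth + 'px';  // one column per page
        article.style.columnGap = '0';
        var pageCount = Math.ceil(article.scrollWidth / window.innerWidth);
        function goToPage(n) {  // n is zero-based
            article.style.transform = 'translateX(' + (-n * window.innerWidth) + 'px)';
        }

    Older WebKit builds would need the prefixed forms (webkitColumnWidth, webkitTransform), and the container's parent needs overflow hidden.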

    Read the article

  • Mapping C structure to an XML element

    - by EFraim
    Suppose I have a structure in C or C++, such as:

        struct ConfigurableElement {
            int ID;
            char* strName;
            long prop1;
            long prop2;
            ...
        };

    I would like to load/save it to/from the following XML element:

        <ConfigurableElement ID="1" strName="namedElem" prop1="2" prop2="3" ... />

    Such a mapping can be trivially done in Java/C# or any other language with run-time reflection, for that matter. Can it be done in any non-tedious way in C++ with macro/template trickery? Bonus points for handling nested structures/unions.
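    A minimal sketch of the classic X-macro technique (one way to get compile-time "reflection"; strName is omitted for brevity): list the fields once, then expand the list for both the declaration and the serializer.

        #include <ostream>

        #define CONFIGURABLE_ELEMENT_FIELDS(X) \
            X(int,  ID)                        \
            X(long, prop1)                     \
            X(long, prop2)

        struct ConfigurableElement {
        #define DECLARE_FIELD(type, name) type name;
            CONFIGURABLE_ELEMENT_FIELDS(DECLARE_FIELD)
        #undef DECLARE_FIELD
        };

        void saveXml(std::ostream& os, const ConfigurableElement& e) {
            os << "<ConfigurableElement";
        #define WRITE_ATTR(type, name) os << " " #name "=\"" << e.name << "\"";
            CONFIGURABLE_ELEMENT_FIELDS(WRITE_ATTR)
        #undef WRITE_ATTR
            os << " />";
        }

    Loading works the same way with a READ_ATTR expansion per field; nested structures need the field-list trick applied recursively.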

    Read the article

  • How do I "valueOf" an enum given a class name?

    - by stevemac
    Let's say I have a simple enum called Animal, defined as:

        public enum Animal {
            CAT, DOG
        }

    and I have a method like:

        private static Object valueOf(String value, Class<?> classType) {
            if (classType == String.class) {
                return value;
            }
            if (classType == Integer.class) {
                return Integer.parseInt(value);
            }
            if (classType == Long.class) {
                return Long.parseLong(value);
            }
            if (classType == Boolean.class) {
                return Boolean.parseBoolean(value);
            }
            // Enum resolution here
        }

    What can I put inside this method to return an instance of my enum when the value is of the classType? I have tried:

        if (classType == Enum.class) {
            return Enum.valueOf((Class<Enum>) classType, value);
        }

    but that doesn't work.
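    A sketch of one warning-free fix: a concrete enum's Class object is never == Enum.class, so test Class.isEnum() instead, and resolve the constant via getEnumConstants() to sidestep the generics bound on Enum.valueOf.

        if (classType.isEnum()) {
            for (Object constant : classType.getEnumConstants()) {
                if (((Enum<?>) constant).name().equals(value)) {
                    return constant;
                }
            }
            throw new IllegalArgumentException(
                    "No enum constant " + classType.getName() + "." + value);
        }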

    Read the article

  • android question about service and the method onstartcommand

    - by user516883
    In a service class there is a method that runs when the service starts. If the service finishes executing, does onStartCommand() run again from the beginning? Is onStartCommand() sort of like a loop that repeats as long as the service is running? For example, I have:

        onStartCommand {
            int x = 0;
            if (x == 0) {
            } else {
            }
        }

    After that completes, does it run again? If you know the answer, please explain. I have read Google's explanation of services and it did not explain that part very well.
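    A minimal sketch inside a Service subclass to illustrate the actual contract (the doWork() helper is hypothetical): onStartCommand() is not a loop. It runs once per startService() call and must return quickly; any looping belongs on a background thread, and the service keeps running until stopSelf() or stopService() is called.

        @Override
        public int onStartCommand(Intent intent, int flags, int startId) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    doWork();  // loop inside here if the work is repetitive
                }
            }).start();
            // START_STICKY asks the system to recreate the service if its
            // process is killed; it does not re-run your code in a loop.
            return START_STICKY;
        }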

    Read the article

  • Memory alignment in C

    - by user1758245
    Here is a snippet:

        #pragma pack(4)
        struct s1 {
            char a;
            long b;
        };
        #pragma pack()

        #pragma pack(2)
        struct s2 {
            char c;
            struct s1 st1;
        };
        #pragma pack()

        #pragma pack(2)
        struct s3 {
            char a;
            long b;
        };
        #pragma pack()

        #pragma pack(4)
        struct s4 {
            char c;
            struct s3 st3;
        };
        #pragma pack()

    I thought sizeof(s4) should be 10 or 12, but it turns out to be 8. I am using Visual C++ 6.0. Could someone tell me why?
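    A worked layout, assuming a 4-byte long (typical for VC6 on 32-bit x86): pack() caps a member's alignment, and a struct member carries its own packed alignment with it rather than inheriting the enclosing pack value.

        /* s3 under pack(2): a at offset 0, one pad byte, b at offsets 2-5
         *   -> sizeof(s3) == 6, alignment 2.
         * s4 under pack(4): c at offset 0; st3 only requires 2-byte
         *   alignment (the alignment it was compiled with), so st3 sits
         *   at offsets 2-7 -> sizeof(s4) == 8, not 10 or 12.
         */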

    Read the article

  • How to allow three optional parameters in the URL by .htaccess?

    - by eij
    I have http://example.com and a PHP routing class that checks whether a given URL exists. I want to add a new route:

        http://example.com/foo/bar/123

    but as soon as I open it, Apache redirects me to an error page. So I'm using a .htaccess file. The code is:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*) /index.php [L]

    and it works as long as I use http://example.com/foo, but once I add more parameters it redirects me to an error. I'm guessing the rewrite code is wrong. Is it? If yes, could you suggest the right one? If not, where else could the problem be?
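    A hedged variant worth trying (the cause is an assumption, not a diagnosis): in per-directory .htaccess context, an explicit RewriteBase plus a relative substitution often fixes multi-segment paths.

        RewriteEngine on
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . index.php [L]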

    Read the article

  • Rails: How to preload a message to user

    - by Michael
    I have a Rails-based prelaunch site with some rotating background images (which are important for selling the idea of the site) that are taking so long to load that users are leaving before they finish. Until then, the only thing users see is the email submission box. What's a good way to show users a message that the site will take some time to load, and then have that message disappear after a reasonable period of time? I'm guessing a jQuery fadeOut() with a timer, but I'm not sure how long to set the timer for, because I'm not sure at what point it would start counting. Any suggestions?
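    A minimal sketch that sidesteps the timer question (the #loading-notice element is hypothetical): show the notice in the markup, then fade it out when the window's load event fires, i.e. once all images have arrived.

        $(window).load(function () {
            $('#loading-notice').fadeOut('slow');
        });

    Unlike a fixed setTimeout, this adapts to however long the images actually take to download.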

    Read the article

  • Is there a way to substr a value returned by toShortString()?

    - by Jym Khana
    I am working with OpenLayers, and I can get a point on a map but I can't get the individual coords.

        feat = drawLayer.features[0];
        var geom = feat.geometry;
        var loca = geom.toShortString();
        var long = loc.substr(0,9);

        alert(geom.toShortString()); // returns the correct coords in xx.xxx,xx.xxx format
        alert(loca);                 // returns 2 very large numbers in xx.xxx,xx.xxx format
        alert(long);                 // returns the first, incorrect number

    What exactly am I doing wrong, and how can I correct it? Thanks
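    Two likely bugs stand out (a guess from the snippet alone): loc is a typo for loca, and long is a reserved word in JavaScript. Splitting on the comma is also more robust than a fixed-width substr:

        var loca = geom.toShortString();   // e.g. "12.345,67.890"
        var parts = loca.split(',');
        var lon = parts[0];
        var lat = parts[1];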

    Read the article

  • Create div tag template and reuse

    - by user1683645
    Is it possible to create a template, e.g. with lots of other elements inside it with proper attribute "tagging", and reuse it with jQuery? For instance, when you want to display user-submitted comments without refreshing the page. The reason I ask is that the code between the div tags is rather long, so rewriting it inside, for instance, prepend() would be too long. What's the best approach for larger manipulations? Create a separate HTML file? I'm pretty new to DOM manipulation, but since I have a programming background I would expect there to be an efficient way to reuse already existing HTML instead of redefining it in jQuery.
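    A minimal sketch of one common pattern (IDs and class names are hypothetical): keep a hidden copy of the markup in the page as a template, clone it, fill in the fields, and prepend the clone.

        var $tpl = $('#comment-template');           // display:none in the markup
        function addComment(author, text) {
            var $c = $tpl.clone().removeAttr('id').show();
            $c.find('.author').text(author);
            $c.find('.body').text(text);
            $('#comments').prepend($c);
        }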

    Read the article

  • AMD 24 core server memory bandwidth

    - by ntherning
    I need some help to determine whether the memory bandwidth I'm seeing under Linux on my server is normal or not. Here's the server spec:

        HP ProLiant DL165 G7
        2x AMD Opteron 6164 HE 12-Core
        40 GB RAM (10 x 4GB DDR1333)
        Debian 6.0

    Using mbw on this server I get the following numbers:

        foo1:~# mbw -n 3 1024
        Long uses 8 bytes. Allocating 2*134217728 elements = 2147483648 bytes of memory.
        Using 262144 bytes as blocks for memcpy block copy test.
        Getting down to business... Doing 3 runs per test.
        0   Method: MEMCPY  Elapsed: 0.58047 MiB: 1024.00000 Copy: 1764.082 MiB/s
        1   Method: MEMCPY  Elapsed: 0.58012 MiB: 1024.00000 Copy: 1765.152 MiB/s
        2   Method: MEMCPY  Elapsed: 0.58010 MiB: 1024.00000 Copy: 1765.201 MiB/s
        AVG Method: MEMCPY  Elapsed: 0.58023 MiB: 1024.00000 Copy: 1764.811 MiB/s
        0   Method: DUMB    Elapsed: 0.36174 MiB: 1024.00000 Copy: 2830.778 MiB/s
        1   Method: DUMB    Elapsed: 0.35869 MiB: 1024.00000 Copy: 2854.817 MiB/s
        2   Method: DUMB    Elapsed: 0.35848 MiB: 1024.00000 Copy: 2856.481 MiB/s
        AVG Method: DUMB    Elapsed: 0.35964 MiB: 1024.00000 Copy: 2847.310 MiB/s
        0   Method: MCBLOCK Elapsed: 0.23546 MiB: 1024.00000 Copy: 4348.860 MiB/s
        1   Method: MCBLOCK Elapsed: 0.23544 MiB: 1024.00000 Copy: 4349.230 MiB/s
        2   Method: MCBLOCK Elapsed: 0.23544 MiB: 1024.00000 Copy: 4349.359 MiB/s
        AVG Method: MCBLOCK Elapsed: 0.23545 MiB: 1024.00000 Copy: 4349.149 MiB/s

    On one of my other servers (based on Intel Xeon E3-1270):

        foo2:~# mbw -n 3 1024
        Long uses 8 bytes. Allocating 2*134217728 elements = 2147483648 bytes of memory.
        Using 262144 bytes as blocks for memcpy block copy test.
        Getting down to business... Doing 3 runs per test.
        0   Method: MEMCPY  Elapsed: 0.18960 MiB: 1024.00000 Copy: 5400.901 MiB/s
        1   Method: MEMCPY  Elapsed: 0.18922 MiB: 1024.00000 Copy: 5411.690 MiB/s
        2   Method: MEMCPY  Elapsed: 0.18944 MiB: 1024.00000 Copy: 5405.491 MiB/s
        AVG Method: MEMCPY  Elapsed: 0.18942 MiB: 1024.00000 Copy: 5406.024 MiB/s
        0   Method: DUMB    Elapsed: 0.14838 MiB: 1024.00000 Copy: 6901.200 MiB/s
        1   Method: DUMB    Elapsed: 0.14818 MiB: 1024.00000 Copy: 6910.561 MiB/s
        2   Method: DUMB    Elapsed: 0.14820 MiB: 1024.00000 Copy: 6909.628 MiB/s
        AVG Method: DUMB    Elapsed: 0.14825 MiB: 1024.00000 Copy: 6907.127 MiB/s
        0   Method: MCBLOCK Elapsed: 0.04362 MiB: 1024.00000 Copy: 23477.623 MiB/s
        1   Method: MCBLOCK Elapsed: 0.04262 MiB: 1024.00000 Copy: 24025.151 MiB/s
        2   Method: MCBLOCK Elapsed: 0.04258 MiB: 1024.00000 Copy: 24048.849 MiB/s
        AVG Method: MCBLOCK Elapsed: 0.04294 MiB: 1024.00000 Copy: 23847.599 MiB/s

    For reference, here's what I get on my Intel-based laptop:

        laptop:~$ mbw -n 3 1024
        Long uses 8 bytes. Allocating 2*134217728 elements = 2147483648 bytes of memory.
        Using 262144 bytes as blocks for memcpy block copy test.
        Getting down to business... Doing 3 runs per test.
        0   Method: MEMCPY  Elapsed: 0.40566 MiB: 1024.00000 Copy: 2524.269 MiB/s
        1   Method: MEMCPY  Elapsed: 0.38458 MiB: 1024.00000 Copy: 2662.638 MiB/s
        2   Method: MEMCPY  Elapsed: 0.38876 MiB: 1024.00000 Copy: 2634.043 MiB/s
        AVG Method: MEMCPY  Elapsed: 0.39300 MiB: 1024.00000 Copy: 2605.600 MiB/s
        0   Method: DUMB    Elapsed: 0.30707 MiB: 1024.00000 Copy: 3334.745 MiB/s
        1   Method: DUMB    Elapsed: 0.30425 MiB: 1024.00000 Copy: 3365.653 MiB/s
        2   Method: DUMB    Elapsed: 0.30342 MiB: 1024.00000 Copy: 3374.849 MiB/s
        AVG Method: DUMB    Elapsed: 0.30491 MiB: 1024.00000 Copy: 3358.328 MiB/s
        0   Method: MCBLOCK Elapsed: 0.07875 MiB: 1024.00000 Copy: 13003.670 MiB/s
        1   Method: MCBLOCK Elapsed: 0.08374 MiB: 1024.00000 Copy: 12228.034 MiB/s
        2   Method: MCBLOCK Elapsed: 0.07635 MiB: 1024.00000 Copy: 13411.216 MiB/s
        AVG Method: MCBLOCK Elapsed: 0.07961 MiB: 1024.00000 Copy: 12862.006 MiB/s

    So according to mbw my laptop is 3 times faster than the server!!! Please help me explain this. I've also tried mounting a RAM disk and using dd to benchmark it, and I get similar differences, so I don't think mbw is to blame. I've checked the BIOS settings and the memory seems to be running at full speed. According to the hosting company the modules are all OK. Could this have something to do with NUMA? It seems that Node Interleaving is disabled on this server. Will enabling it (thus turning off NUMA) make a difference?

        foo1:~# numactl --hardware
        available: 4 nodes (0-3)
        node 0 cpus: 0 1 2 3 4 5
        node 0 size: 8190 MB
        node 0 free: 7898 MB
        node 1 cpus: 6 7 8 9 10 11
        node 1 size: 12288 MB
        node 1 free: 12073 MB
        node 2 cpus: 18 19 20 21 22 23
        node 2 size: 12288 MB
        node 2 free: 12034 MB
        node 3 cpus: 12 13 14 15 16 17
        node 3 size: 8192 MB
        node 3 free: 8032 MB
        node distances:
        node   0   1   2   3
          0:  10  20  20  20
          1:  20  10  20  20
          2:  20  20  10  20
          3:  20  20  20  10
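    One quick check worth running (standard numactl flags; the node numbers are arbitrary): pin both the threads and the allocation of mbw to a single node, then to different nodes, and compare.

        foo1:~# numactl --cpunodebind=0 --membind=0 mbw -n 3 1024   # local access
        foo1:~# numactl --cpunodebind=0 --membind=2 mbw -n 3 1024   # remote access

    A large gap between the two would point at NUMA placement rather than the modules themselves.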

    Read the article

  • Weird Network Behavior of Home Router

    - by Stilgar
    First of all, I would like to apologize, because what you are going to read will be long and confusing, but I have been fighting this issue for 3 days now and am out of ideas. At home I have the following setup:

    - 50Mbps Internet connects into a home router A.
    - 2 desktop computers connect to router A via standard FTP LAN cables, including one where the cable is ~20m long.
    - A second router B connects to router A via standard FTP LAN cable X (~20m long).
    - Several devices connect to the wireless network of router B, and a couple of desktop computers are connected to it through FTP LAN cables.

    For some reason, computers connected to router B when it is connected via cable X have a very slow Internet connection, like 5 times slower than expected. This is the actual problem I am trying to solve. Interesting facts:

    - If a computer is connected to cable X directly instead of through router B, the Internet speed is just fine (up to the 50Mbps I get from the ISP). Tested with two computers.
    - I have tried replacing router B with another router C, and the problem persists.
    - If I connect router B via another cable to the same ports with the same settings, everything seems to work fine and computers connected to router B have quite fast Internet.
    - I have tested mainly via Speedtest.net, but I have also achieved similar speeds when downloading a file.
    - The upload speed is considerably higher than the download speed in all cases. Note that my ISP usually provides higher upload speed (unless it manages to hit the 50Mbps cap).
    - The speed when connecting through router B with cable X seems to be reduced 4-5 times no matter what the original speed is. For example, via router B I get 10Mbps to local servers where I get 50Mbps on router A. With a distant server where the ISP can only provide 25Mbps, I get 4-5Mbps on router B.
    - WiFi is slower than LAN on both routers (which is normal), but the speed is reduced proportionally for WiFi. The upload speed, normally higher from the ISP, is also reduced proportionally.

    I have tried two different network configurations: one with NAT behind NAT, where router B connects to router A via the WAN port and has its own DHCP, and a second where router B connects to router A via a standard LAN port and has DHCP disabled. In that configuration router B serves as a switch, and the network gateway for computers connected to router B is the internal IP address of router A. Both configurations work just fine, but both manifest the reduced-speed issue. Pings seem to work just fine, and as far as I can tell none of the cables is crossed. The RJ45 wiring of cable X is:

        orange, orange-white, brown, brown-white, blue, blue-white, green, green-white

    This is a big problem for me, since cable X passes through walls and floors and is very hard to replace. I may also have gotten some of the facts wrong, because I am almost going crazy with this issue, and testing involves going several floors up and down the staircase. One hypothesis I came up with is that the cable is defective in such a way that the voltage from the router affects its performance: connected to a computer it performs just fine, but the router has less power. A related hypothesis is that the cable is affected by electrical cables in the walls when the voltage is low. (I know nothing about electricity.) So, any ideas what to do, what to test, or what the issue may be?

    Read the article

  • Is the Cloud ready for an Enterprise Java web application? Seeking a JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible, or a good idea at all, to deploy a Java enterprise web application to a cloud such as Amazon EC2. More exactly, I'm looking for infrastructure options for an application that shall handle a few hundred users with long but neither CPU- nor memory-intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs), and EC2. I've noticed that there is a project called JBoss Cloud, so people are working on enabling such deployments; on the other hand, it doesn't seem to be mature yet, and I'm not sure the cloud is ready for this kind of application, which differs from typical cloud-based applications like Twitter. Would you recommend deploying it to the cloud? What are the pros and cons?

    The application is a Java EE 5 web application whose main function is to enable users to compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans and JPA for persistence of entities to an RDBMS, and fetches information about Parts from the company's inventory system via a web service. Aside from external users, it's also used by a few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their product, and should be reasonably scalable and available, though these qualities are only of medium importance at this stage.

    I've proposed an architecture consisting of a firewall (FW) and a load balancer supporting sticky sessions and https (in the cloud this would be replaced with EC2's Elastic Load Balancing service and a FW on the app servers; in a physical architecture the load balancer would be hardware), then two physical clustered application servers combined with web servers (so that if one fails, a user doesn't lose his/her long-built product), and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance, and good scalability as long as a single RDBMS can keep up with the load, which should be OK for quite a while, because most of the operations are done in memory using a stateful bean and only occasionally stored to or retrieved from the DB, and the amount of data is low too. A problematic part could be the dependency on the remote inventory-system web service, but with good caching of its outputs in the application it should be OK too.

    Unfortunately, I have only a vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for a few hundred users needs. My rough and mostly unfounded estimate, based on actual Amazon offerings, is that 1.7GB and a single 2-core "modern CPU" at around 2.5GHz (the High-CPU Medium Instance) should be sufficient for either of the two application servers (since we can handle higher load by provisioning more of them). Alternatively, I would consider the Large instance (64b, 7.5GB RAM, 2 cores at 1GHz).

    So my question is whether such a deployment to the cloud is technically and financially feasible, or whether dedicated/VPS servers would be a better option, and whether there are real-world experiences with something similar. Thank you very much!

    /Jakub Holy

    PS: I've found the "JBoss EAP in a Cloud" case study, which shows that it is possible to deploy a real-world Java EE application to the EC2 cloud, but unfortunately there are no details regarding topology, instance types, or anything :-(

    Read the article

  • Postfix sasl login failing no mechanism found

    - by Nat45928
    Following the guide at http://flurdy.com/docs/postfix/ with Postfix, Courier, MySQL, and SASL gave me a web server whose IMAP functionality works fine, but when I log in to send a message, using the same user ID and password as for the IMAP server, my login to the SMTP server is rejected. If I do not specify a login for the outgoing mail server, the message is sent just fine. The errors in Postfix's log are:

        Jul 6 17:26:10 Sj-Linux postfix/smtpd[19139]: connect from unknown[10.0.0.50]
        Jul 6 17:26:10 Sj-Linux postfix/smtpd[19139]: warning: SASL authentication failure: unable to canonify user and get auxprops
        Jul 6 17:26:10 Sj-Linux postfix/smtpd[19139]: warning: unknown[10.0.0.50]: SASL DIGEST-MD5 authentication failed: no mechanism available
        Jul 6 17:26:10 Sj-Linux postfix/smtpd[19139]: warning: unknown[10.0.0.50]: SASL LOGIN authentication failed: no mechanism available

    I've checked all usernames and passwords for MySQL. What could be going wrong?

    Edit: here is some other information. Installed libraries for Postfix, Courier, and SASL:

        aptitude install postfix postfix-mysql
        aptitude install libsasl2-modules libsasl2-modules-sql libgsasl7 libauthen-sasl-cyrus-perl sasl2-bin libpam-mysql
        aptitude install courier-base courier-authdaemon courier-authlib-mysql courier-imap courier-imap-ssl courier-ssl

    And here is my /etc/postfix/main.cf:

        myorigin = domain.com
        smtpd_banner = $myhostname ESMTP $mail_name
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        #myhostname = my hostname
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        local_recipient_maps =
        mydestination =
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        mynetworks_style = host

        # how long if undelivered before sending warning update to sender
        delay_warning_time = 4h

        # will it be a permanent error or temporary
        unknown_local_recipient_reject_code = 450

        # how long to keep message on queue before return as failed.
        # some have 3 days, I have 16 days as I am backup server for some people
        # whom go on holiday with their server switched off.
        maximal_queue_lifetime = 7d

        # max and min time in seconds between retries if connection failed
        minimal_backoff_time = 1000s
        maximal_backoff_time = 8000s

        # how long to wait when servers connect before receiving rest of data
        smtp_helo_timeout = 60s

        # how many address can be used in one message.
        # effective stopper to mass spammers, accidental copy in whole address list
        # but may restrict intentional mail shots.
        smtpd_recipient_limit = 16

        # how many error before back off.
        smtpd_soft_error_limit = 3

        # how many max errors before blocking it.
        smtpd_hard_error_limit = 12

        # Requirements for the HELO statement
        smtpd_helo_restrictions = permit_mynetworks, permit

        # Requirements for the sender details
        smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks,
            warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain,
            reject_unauth_pipelining, permit

        # Requirements for the connecting server
        smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org,
            reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org

        # Requirement for the recipient address
        smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks,
            permit_sasl_authenticated, reject_non_fqdn_recipient,
            reject_unknown_recipient_domain, reject_unauth_destination, permit

        smtpd_data_restrictions = reject_unauth_pipelining

        # require proper helo at connections
        smtpd_helo_required = yes

        # waste spammers time before rejecting them
        smtpd_delay_reject = yes
        disable_vrfy_command = yes

        # not sure of the difference of the next two
        # but they are needed for local aliasing
        alias_maps = hash:/etc/postfix/aliases
        alias_database = hash:/etc/postfix/aliases

        # this specifies where the virtual mailbox folders will be located
        virtual_mailbox_base = /var/spool/mail/virtual

        # this is for the mailbox location for each user
        virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf

        # and this is for aliases
        virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf

        # and this is for domain lookups
        virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf

        # this is how to connect to the domains (all virtual, but the option is there)
        # not used yet
        # transport_maps = mysql:/etc/postfix/mysql_transport.cf

        virtual_uid_maps = static:5000
        virtual_gid_maps = static:5000

        # SASL
        smtpd_sasl_auth_enable = yes

        # If your potential clients use Outlook Express or other older clients
        # this needs to be set to yes
        broken_sasl_auth_clients = yes
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_local_domain =
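    One hedged place to look (an assumption based on the "no mechanism available" warnings, not a confirmed fix): DIGEST-MD5 requires plaintext-password access via auxprop, so limiting the offered mechanisms to PLAIN and LOGIN in the Cyrus SASL config that smtpd reads (commonly /etc/postfix/sasl/smtpd.conf on Debian) often clears this error:

        pwcheck_method: auxprop
        auxprop_plugin: sql
        mech_list: plain login

    The sql auxprop plugin then needs its usual sql_* settings pointing at the same MySQL user table that Courier authenticates against.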

    Read the article

  • IP Micro-outages, telephone micro-outages, and CATV micro-outages

    - by Michael Graff
    This is a long and complicated question, mostly because it has been going on for 2.5 years without a solution in sight. It is also only one-third computer related; the other two-thirds are cable TV and cable-phone related.

    Background. I have COX Communications for a cable provider, and we get Internet, digital cable TV, and digital phone service through them. The Internet is a SB5101 right now, and has been a DPC2100 and SB5120 in the past, with the same results. The phone service is provided through a telephone interface mounted on the outside of the house (not classic VoIP), and the CATV is through a Scientific Atlanta receiver without DVR. I do have a TiVo connected to the CATV box.

    Symptoms. The CATV shows "blocking": sometimes of very short duration, where a few blocks appear on the screen; sometimes long enough that the video "pauses" for 2-5 seconds; and rarely, but not unseen, the audio also fails. The CATV decoder box shows no correctable (FEC) or uncorrectable errors; that is, all BER counters are zero for the video stream. The Internet shows "micro-outages" where it appears that sent packets are not making it out, but I continue to receive packets from local modems. That is, pings stop coming back, but I continue to see modems broadcast for DHCP, and sometimes they ask more than once. The cable modem shows no errors during this time, but cable modems lie like you would not believe; it is actually possible to unplug the coax from the modem for 20 seconds and it reports NO ERRORS to the provider's tools. The phone service cuts out for 1-3 seconds, infrequently. When this happens, I hear NOTHING (not even comfort noise), and the remote side hears a "click" as if I were getting a call-waiting message; however, there is no incoming call, other than the one I'm currently on of course. Things SEEM to happen more frequently when the outside temperature swings from cold to warm, so fall/spring seems worse than summer/winter. Micro-outages occur anywhere from once or twice a day (which I could ignore) to 10 times per hour. All SNR, signal levels, noise levels, etc. measure very close to optimal.

    COX's diagnosis. This is a continual pain for me. Over the last 2.5 years, they have opened tickets, "fixed" something, and closed them. They close a ticket without confirming that things are indeed better, and when I reopen they cannot do that, but instead open a new ticket and send yet another low-level tech out to run the same signal tests and report that all is OK. I've finally gotten a line tech who has a clue and is motivated enough to pursue this with me. We have tried things like switching the local nodes over to UPS and generator power, but this does not trigger the noise. We have tried replacing all cabling, the tap outside my house, the modem, and the CATV decoder, all without resolution. Recently they have decided it is my computer or switch, my TiVo, and my phone that are all broken and causing this issue.

    My debugging steps. I spent the worst TV-watching day of my life yesterday and part of today. I watched live TV without the TiVo. I witnessed blocking, but it did "feel different" and was actually more severe. Some days it is better, some days it is worse, so perhaps this was just a very bad day. Today, I connected the TiVo to my DVD player and ran two very long movies through it; I saw no blocking at all during nearly 6 hours of video.

    Suggestions? Does anyone have any suggestions on what to do next? I understand perhaps only the IP side can be addressed here, but it is one of the more limiting debugging options.
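    For the IP side, a hedged way to capture hard evidence (the target host is a placeholder): leave a timestamped once-per-second ping running on a machine behind the modem, so each micro-outage gets logged with its exact time and duration.

        ping 8.8.8.8 | while read line; do echo "$(date '+%F %T') $line"; done >> ping.log

    Correlating those timestamps against the CATV blocking and the phone clicks would show whether all three services drop at the same instants, which would point firmly at the plant rather than the in-home equipment.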

    Read the article

  • Pushing DNSSEC updates with offline keys

    - by eggyal
    In a non-professional capacity, I look after the DNS of some 18 domains, mostly personal/vanity domains for immediate family. I outsource the whole shebang to an inexpensive managed hosting provider with a web interface through which I manage the zones; since the provider also offers DNSSEC, I have successfully deployed that too.

    These domains are so unimportant that an attack targeted against them seems much less likely than a general compromise of my provider's systems, at which point the records of all their customers might be changed to misdirect traffic (perhaps with extremely long TTLs). DNSSEC could protect against such an attack, but only if the zone's private keys are not held by the hosting provider. So, I wonder: how can one keep DNSSEC private keys offline yet still transfer signed zones to an outsourced DNS host?

    The most obvious answer (to me, at least) is to run one's own shadow/hidden master (from which the provider can slave) and then copy offline-signed zone files to the master as required. The problem is that the only machine I (want to*) control is my personal laptop, which usually connects from a typical home ADSL (behind NAT over a dynamically-assigned IP address). Having them slave from that (e.g. with a very long Expiry time on the zone for periods when my laptop is offline/unavailable) would not only require a Dynamic DNS record from which they can slave (if indeed they can slave from a named host rather than a static IP address), but would also involve me running a DNS server on my laptop and opening both it and my home network up to incoming zone-transfer requests: not ideal.

    I would prefer a much more push-oriented design, whereby my laptop initiates the transfer of offline-signed zone files/updates to the provider's servers. I looked into whether nsupdate could fit the bill: documentation is a little sketchy, but my testing (with BIND 9.7) suggests it can indeed update DNSSEC zones, but only where the server holds the keys to perform the zone signing; I have not found a way to have it accept an update that includes the relevant RRSIG/NSEC/etc. records. Is this a supported use case?

    If not, I suspect the only solutions that could fit the bill will involve non-DNS-based transfer of the zone updates, and I would welcome recommendations that are supported by (hopefully inexpensive) hosting providers: SFTP/SCP? rsync? RDBMS replication? Proprietary API?

    Finally, what would be the practical implications of such a setup? Key rotation jumps out as an obvious difficulty, especially if my laptop is offline for extended periods. But the zones are extremely stable, so perhaps I could get away with long-lived ZSKs**...?

    * Whilst I could run a shadow/hidden master on e.g. an outsourced VPS, I dislike the overhead of having to secure / manage / monitor / maintain yet another system, not to mention the additional financial cost of so doing.

    ** Okay, this would enable a concerted attacker to replay outdated records, but the risk and impact of such are both tolerable in the case of these domains.
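    A minimal sketch of the offline half of such a push design, using BIND's standard dnssec-signzone tool (the key filenames, zone name, and upload host are hypothetical, and the upload step assumes the provider accepts files over SCP):

        # sign the zone on the offline laptop (KSK given with -k, ZSK as argument)
        dnssec-signzone -o example.net -k Kexample.net.+008+12345 \
            db.example.net Kexample.net.+008+54321
        # push the resulting db.example.net.signed file to the provider
        scp db.example.net.signed provider-host:/path/to/zones/

    The open question of course remains the provider-side import, which is exactly the non-DNS transfer mechanism being asked about.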

    Read the article
