Search Results

Search found 2047 results on 82 pages for 'the dude man'.

Page 79/82 | < Previous Page | 75 76 77 78 79 80 81 82  | Next Page >

  • Following my passion

    - by Maria Sandu
    What makes you go the extra mile? What makes you move forward and be ambitious? My name is Alin Gheorghe and I am currently working as a Contracts Administrator in the Shared Service Centre in Bucharest, Romania. I graduated from the Political Science Faculty of the National School of Political and Administrative Studies here in Bucharest and I am currently pursuing a Master’s program in Security and Diplomacy at the same university. Although I have been working a full-time job here at Oracle since January 2011 and also going to school after work, I am going to tell you how I spend my spare time and about my passion. I always thought that if one doesn’t have something he would consider a passion, it’s only a matter of time until he discovers one. Looking back, I can tell you that I discovered mine when I was 14 years old: I remember watching a football game and suddenly becoming fascinated by the “man in black” that all the football players obeyed during the match. That year I attended and passed a referee course within my local referee committee, and about six months later I was delegated to my first official game at a youth tournament. Almost 10 years have passed since then, and I can tell you that I very much love and appreciate this activity, which I have been doing each and every weekend, nine months every year, officiating more than 600 official games so far.
    And even if not having a real free weekend or holiday might sound very consuming, I can say that having something I am passionate about helps me keep myself balanced and happy while giving me a way to channel any stress or anxiety I may feel. I think it’s important to have something of your own besides work that you spend time and effort on. Whether it’s painting, writing or a sport, having a passion can only have a positive effect on your life. And as with every extra thing, it’s not always easy to follow your passion, but is it worth it? Speaking from my own experience I am sure it is, and here are some tips and tricks I constantly use so that I don’t give up on my passion: No matter how much time you spend at work and how much credit you get for it, it will always be the passion-related achievements that comfort you more and boost your self-esteem, and nothing compares to that feeling. I always try to keep this in mind, so that each time I think about giving up I get even more ambitious to move forward. Everybody can do what they are paid to do or what they are asked to do at work, but not everybody can go that extra mile when it comes to following their passion and putting in extra work for it. By exercising this constantly, you get used to applying the same attitude to work-related tasks. It takes accurate planning, anticipation and forecasting to combine your work with your passion, so having a full schedule and keeping up with it will help you develop and exercise exactly those skills, and it will also prove to you that you are up to such a challenge. I always keep in mind, as a final goal, that if you get very good at your passion you can actually start earning from it; I think that is the ultimate level, when you can say that you make a living by doing exactly what you are passionate about. In conclusion, by taking the easy way you not only miss out on something nice: life’s priceless rewards usually come from the things that you actually believe in and know how to stand up for over time.

    Read the article

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
      … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time", said one of the tortoises. A little voice from just outside the fence said, "If you are going to talk that way about me, I won't go." Is it too much to ask of a quite expensive 3rd party backup tool to be way faster than the SQL Server native backup? Or at least to save a respectable amount of storage by producing a really smaller backup file? By saying “really smaller”, I mean at least getting a file of half the size. After Googling around in an attempt to understand what other SQL people are using for database backups, I see that most people are using one of the three main players in the SQL backup area: LiteSpeed by Quest, SQL Backup by Red Gate, and SQL Safe by Idera. The feedback about those tools is truly emotional and happy. However, while reading the forums and blogs I wondered whether many people are simply accustomed to using the above tools since SQL 2000 and 2005. This is easy to understand: a 300GB database backup, for instance, using a regular SQL 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course). Then you take a 3rd party tool which performs the same backup in 30 minutes, resulting in a 30GB file and leaving you speechless; you run to management persuading them to buy it because it is definitely worth the price. In addition to the increased speed and disk space savings, you would also get backup file encryption and virtual restore - features that are still missing from SQL Server. But if you, like me, don’t need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in the SQL 2005 days), you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture. Medium size database: take a look at the table below and check out how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database. It appears that, when talking about backup speed, SQL 2008 R2 compresses and performs the backup in similar overall times as all three other tools; the 3rd party tools' maximum compression level takes twice as long. The backup file gain is not that impressive, except at the highest compression levels, but the price you pay there is a very high CPU load and a much longer run time. Only SQL Safe by Idera was quite fast with its maximum compression level, but it used 95% CPU on the server for most of the run time. Note that I have used two types of destination storage, SATA (11 disks) and FC (53 disks), and, obviously, on the faster storage I got my backup ready in half the time. Looking at the above results, should we spend money and bother with another layer of complexity and a software middle-man for medium sized databases? I’m definitely not going to do so. Very large database: as the next phase of this benchmark, I moved to a 6 terabyte database, which was actually my main backup target. Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped.
    SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU - in the case above, 8 files for a 2 Quad CPU server. The impact of additional files beyond that is minimal. However, SQLsafe doesn’t show any speed improvement between 4 files and 8 files. Of course, with such huge databases every half percent of compression turns into noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite a valuable purchase. Still, the backup speed and the high CPU load are variables that should be taken into consideration. As for us, backup speed is more critical than storage, and we cannot allow a production server to sustain 95% CPU for such a long time. Bottom line, 3rd party backup tool developers, we are waiting for a breakthrough release. There are a few unanswered questions, like the restore speed comparison between the different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.
    Benchmark server: SQL Server 2008 R2 SP1, 2 Quad CPU. Database location: NetApp FC 15K aggregate, 53 discs.
    Backup statements: no matter how good the UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I have used the extended stored procedures (command line execution is also an option; I haven’t noticed any impact on backup performance).
    SQL backup: backup database <DBNAME> to disk= '\\<networkpath>\par1.bak' , disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak' with format, compression
    LiteSpeed: EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname= N'<DBName> full backup', @desc = N'Test', @compressionlevel=8, @filename= N'\\<networkpath>\par1.bak', @filename= N'\\<networkpath>\par2.bak', @filename= N'\\<networkpath>\par3.bak', @init = 1
    SQL Backup: EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME> TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"'
    SQL safe: EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak', @backuptype = 'Full', @compressionlevel = 4, @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak'
    If you still insist on using 3rd party tools for the backups in your production environment with the maximum compression level, you will definitely need to consider limiting CPU usage, which will increase the backup operation time even more: RedGate: use the THREADPRIORITY option (values 0 – 6). LiteSpeed: use @throttle (a percentage, like 70%). SQL safe: the only thing I have found was the @Threads option. Yours, Maria
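    For completeness, the native statement above can also be launched from a plain command prompt instead of SQL Server Agent (as noted, command line execution is an option). A minimal sqlcmd sketch, with placeholder server, database and share names rather than the ones from the benchmark:

      sqlcmd -S MYSERVER -E -Q "BACKUP DATABASE MyBigDB TO DISK = '\\share\backup\par1.bak', DISK = '\\share\backup\par2.bak', DISK = '\\share\backup\par3.bak', DISK = '\\share\backup\par4.bak' WITH FORMAT, COMPRESSION, STATS = 5"

    Eight files, one per core, would match the 2 Quad CPU box used in the benchmark above.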

    Read the article

  • I Know What I Did This Summer: Put Down Trex Decking

    - by thatjeffsmith
    If you’re wondering why I would bore everyone with my pictures and frequent status updates/tweets from the past week – it’s so I could document the process of refurbishing my deck, or what some would call a porch. When we go to take a vacation, buy a car, do anything – we also read personal blogs to get the real story. So, if you’re curious about what it takes to tackle this sort of project, read on.
    Skills/Equipment/Manpower We Possessed
    I took the old decking out by myself. I’m about 230 lbs, more than 6′ tall, and I’m pretty healthy. This took about 8 hours over two afternoons. Three of us put the deck back together. My wife has two engineering degrees. Her father also has two engineering degrees. Lots of brainpower available here. Also, her dad ran the public works department for a county for more than 20 years – so lots and lots of practical experience on hand. We had a compound mitre saw, a skilsaw, 2-3 crowbars, a framing hammer, 3 cordless drills, a corded drill, lots of sawhorses, a power sander, an angle grinder, a 10×10 Coleman canopy tent, a Ford F-150 pickup truck, outdoor speakers and lots of iTunes playlists, plenty of water and cold beer.
    Why We Did This
    Our deck was relatively young – it was built in 2005. However, the pressure treated boards must not have been adequately maintained before we bought the house. I had powerwashed the deck every other year and had it stained a few times. The boards just rotted. We’re going to be in the house for a long time, and we wanted something that would look nice and require little maintenance. More bad deck boards; the deck boards were in bad shape.
    Things We Learned
    The two most important things: The hidden fasteners have to be put in JUST right. Wedge them into the grooved board, then bend down the bit that is screwed down. We didn’t do this on the first board and couldn’t get the second board to fit nearly close enough. Watching the official TREX YouTube video helped immensely, and we should have watched that first. When pre-drilling holes for the boards that need to be screwed down – DO NOT pre-drill through the underlying framing wood. ONLY pre-drill through the TREX itself. Otherwise the screw won’t seat in the board properly: instead of sitting down flush with the board, it will stop at the top of the board and just spin. I had to call the place that sold me the screws to find this out. So about a third of our screws look like crap. If it doesn’t look or feel right – stop everything and pick up your computer or your phone. It’s not right, and it will be much easier to stop and find out why. We didn’t do this, and now I’m going to see every screw that’s not flush with the boards and get upset. Oh well.
    The Process
    How much time did it take? Well, I spent about 8 hours taking the deck apart. And then the three of us spent 8 hours the first day, 10 hours the second day, 8 hours the third, and another 6 hours on the fourth day. That’s like 104 man-hours. We supposedly saved four or five thousand dollars in labor, but don’t do the math here or you might get a bit upset. The main thing is that we got what we wanted, and there won’t be any surprises later. Now for some pictures…
    This 6”+ pry bar made the destruction of the old deck much easier. Most of the joists, once exposed, were OK. This joist wasn’t sitting on ANYTHING before – we think a lazy gas person cut the board to sneak a gas line in.
    Awesome… These monster lag bolts had to be accounted for when putting in the additional framing.
    The border pattern Sheri wanted to put in required a lot more framing.
    These were the first boards to go down – we screwed them in as there was no way to attach clips. I sat, kicked in the boards, and then drilled these clips in – but my wife was able to go MUCH faster by using her hands to lock the boards in and drilling on her knees. I liked locking the board in with my feet when they needed to be ‘encouraged’ to go straight. The first board took FOREVER to go in, but then when we got rolling, we were able to put in a 20′ board in less than 10 minutes.
    This was the end of construction day #2 – we got much further than we thought we would.
    Ah, the dreaded last 10% – what to do here? Remember those ‘floating’ stringers? Yeah, we fixed that up a bit, too. My wife used a website (and her brain) to calculate exactly how to cut the stringers to give us the rise/run we needed with the proper clearance and all that jazz.
    The stairs with stringers and toe kicks – this was worth the effort. It started raining on us as I screwed down the steps – but we managed to get our shade tent up on the deck to protect us from the rain, too.
    The stairs, finished. Finished, mostly. Good corner shot. The top of the stairs. Stairs, looking down. Celebratory beer.
    In Summary
    There are a few things we’re not happy with. I think we can fix them up – but later. I have a few things left to finish: rewire the lighting, get the gas grille put back in, and rehang some screen doors. I was expecting this to be a lot worse than it was. If I didn’t have the help, I would never have done it myself. But I’m glad that I did have that help and did do that project. It’s not often you get to spend that kind of quality time with family while building cool stuff.

    Read the article

  • Error while installing CHMSee

    - by Anshuman Chakraborty
    I have recently migrated from Windows to Ubuntu. My current locale shows below output :- cha@COMPUTER:~$ locale LANG=en_IN LANGUAGE=en_IN:en LC_CTYPE="en_IN" LC_NUMERIC="en_IN" LC_TIME="en_IN" LC_COLLATE="en_IN" LC_MONETARY="en_IN" LC_MESSAGES="en_IN" LC_PAPER="en_IN" LC_NAME="en_IN" LC_ADDRESS="en_IN" LC_TELEPHONE="en_IN" LC_MEASUREMENT="en_IN" LC_IDENTIFICATION="en_IN" LC_ALL= When I am trying to install CHMSee (or any other Application) using UBUNTU Software Center. I am getting below error. installArchives() failed: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Selecting previously unselected package libchm1. (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 207053 files and directories currently installed.) Unpacking libchm1 (from .../libchm1_2%3a0.40a-1_i386.deb) ... Selecting previously unselected package libjavascriptcoregtk-1.0-0. Unpacking libjavascriptcoregtk-1.0-0 (from .../libjavascriptcoregtk-1.0-0_1.8.0-0ubuntu2_i386.deb) ... Selecting previously unselected package libwebkitgtk-1.0-common. Unpacking libwebkitgtk-1.0-common (from .../libwebkitgtk-1.0-common_1.8.0-0ubuntu2_all.deb) ... 
Selecting previously unselected package libwebkitgtk-1.0-0. Unpacking libwebkitgtk-1.0-0 (from .../libwebkitgtk-1.0-0_1.8.0-0ubuntu2_i386.deb) ... Selecting previously unselected package chmsee. Unpacking chmsee (from .../chmsee_1.3.0-2ubuntu2_i386.deb) ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... Processing triggers for hicolor-icon-theme ... Processing triggers for man-db ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Setting up qmail (1.06-4) ... The hostname -f command returned: $1 Your system needs to have a fully qualified domain name (fqdn) in order to install the var-qmail packages. Installation aborted. dpkg: error processing qmail (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of qmail-run: qmail-run depends on qmail (>= 1.06-2.1); however: Package qmail is not configured yet. dpkg: error processing qmail-run (--configure): dependency problems - leaving unconfigured Setting up libchm1 (2:0.40a-1) ... No apport report written because the error message indicates its a followup error from a previous failure. Setting up libjavascriptcoregtk-1.0-0 (1.8.0-0ubuntu2) ... Setting up libwebkitgtk-1.0-common (1.8.0-0ubuntu2) ... Setting up libwebkitgtk-1.0-0 (1.8.0-0ubuntu2) ... Setting up chmsee (1.3.0-2ubuntu2) ... Processing triggers for libc-bin ... ldconfig deferred processing now taking place Errors were encountered while processing: qmail qmail-run Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) Setting up qmail (1.06-4) ... The hostname -f command returned: $1 Your system needs to have a fully qualified domain name (fqdn) in order to install the var-qmail packages. Installation aborted. dpkg: error processing qmail (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of qmail-run: qmail-run depends on qmail (>= 1.06-2.1); however: Package qmail is not configured yet. dpkg: error processing qmail-run (--configure): dependency problems - leaving unconfigured Can someone please help me in resolving this issue. The elaboration would be most appreciated since I am very new to this. Thanks, Anshuman Chakraborty
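    Judging from the perl warnings at the top of that log, the en_IN locale data seems to be missing rather than anything being wrong with CHMSee itself. A minimal first step to try, assuming a stock Ubuntu locale setup, is to generate the locale and re-run the installation; the separate qmail "fully qualified domain name" failure would still need its own fix:

      $ sudo locale-gen en_IN
      $ sudo dpkg-reconfigure locales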

    Read the article

  • What's up with LDoms: Part 5 - A few Words about Consoles

    - by Stefan Hinker
    Back again to look at a detail of LDom configuration that is often forgotten - the virtual console server. Remember, LDoms are SPARC systems. As such, each guest will have its own OBP running. And to connect to that OBP, the administrator will need a console connection. Since it's OBP, and not some x86 BIOS, this console will be very serial in nature ;-) It's really very much like in the good old days, where we had a terminal concentrator where all those serial cables ended up. Just like with other components in LDoms, the virtualized solution looks very similar. Every LDom guest requires exactly one console connection. Envision this as similar to the RS-232 port on older SPARC systems. The LDom framework provides one or more console services that provide access to these connections. This would be the virtual equivalent of a network terminal server (NTS), where all those serial cables are plugged in. In the physical world, we'd have a list somewhere that would tell us which TCP port of the NTS was connected to which server. "ldm list" does just that:
    root@sun # ldm list
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    primary active -n-cv- UART 16 7680M 0.4% 27d 8h 22m
    jupiter bound ------ 5002 20 8G
    mars active -n---- 5000 2 8G 0.5% 55d 14h 10m
    venus active -n---- 5001 2 8G 0.5% 56d 40m
    pluto inactive ------ 4 4G
    The column marked "CONS" tells us where to reach the console of each domain. In the case of the primary domain, this is actually a (more) physical connection - it's the console connection of the physical system, which is either reachable via the ILOM of that system, or directly via the serial console port on the chassis. All the other guests are reachable through the console service which we created during the initial setup of the system. Note that pluto does not have a port assigned. This is because pluto is not yet bound. (Binding can be viewed very much as the assembly of computer parts - CPU, memory, disks, network adapters and a serial console cable are all put together when binding the domain.) Unless we set the port number explicitly, LDoms Manager will do this on a first come, first served basis. For just a few domains, this is fine. For larger deployments, it might be a good idea to assign these port numbers manually using the "ldm set-vcons" command. However, there is even better magic associated with virtual consoles. You can group several domains into one console group, reachable through one TCP port of the console service. This can be useful when several groups of administrators are to be given access to different domains, or for other grouping reasons. Here's an example:
    root@sun # ldm set-vcons group=planets service=console jupiter
    root@sun # ldm set-vcons group=planets service=console pluto
    root@sun # ldm bind jupiter
    root@sun # ldm bind pluto
    root@sun # ldm list
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    primary active -n-cv- UART 16 7680M 6.1% 27d 8h 24m
    jupiter bound ------ 5002 200 8G
    mars active -n---- 5000 2 8G 0.6% 55d 14h 12m
    pluto bound ------ 5002 4 4G
    venus active -n---- 5001 2 8G 0.5% 56d 42m
    root@sun # telnet localhost 5002
    Trying 127.0.0.1... Connected to localhost. Escape character is '^]'.
    sun-vnts-planets: h, l, c{id}, n{name}, q:l
    DOMAIN ID DOMAIN NAME DOMAIN STATE
    2 jupiter online
    3 pluto online
    sun-vnts-planets: h, l, c{id}, n{name}, q:npluto
    Connecting to console "pluto" in group "planets" ....
    Press ~? for control options ..
    What I did here was add the two domains pluto and jupiter to a new console group called "planets" on the service "console" running in the primary domain. Simply using a group name will create such a group if it doesn't already exist. By default, each domain has its own group, using the domain name as the group name. The group will be available on port 5002, chosen by LDoms Manager because I didn't specify it. If I connect to that console group, I will now first be prompted to choose the domain I want to connect to from a little menu. Finally, here's an example of how to assign port numbers explicitly:
    root@sun # ldm set-vcons port=5044 group=pluto service=console pluto
    root@sun # ldm bind pluto
    root@sun # ldm list
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    primary active -n-cv- UART 16 7680M 3.8% 27d 8h 54m
    jupiter active -t---- 5002 200 8G 0.5% 30m
    mars active -n---- 5000 2 8G 0.6% 55d 14h 43m
    pluto bound ------ 5044 4 4G
    venus active -n---- 5001 2 8G 0.4% 56d 1h 13m
    With this, pluto would always be reachable on port 5044 in its own exclusive console group, no matter in which order other domains are bound. Now, you might be wondering why we always have to mention the console service name, "console", in all the examples here. The simple answer is that there could be more than one such console service. For all "normal" use, a single console service is absolutely sufficient. But the system is flexible enough to allow more than that single one, should you need them. In fact, you could even configure such a console service on a domain other than the primary (or control domain), which would make that domain a real console server. I actually have a customer who does just that - they want to separate console access from the control domain functionality. But this is definitely a rather sophisticated setup. Something I don't want to go into in this post is access control. vntsd, which is the daemon providing all these console services, is fully RBAC-aware, and you can configure authorizations for individual users to connect to console groups or individual domains' consoles. If you can't wait until I get around to security, check out the man page of vntsd. Further reading: The Admin Guide is rather reserved on this subject. I recommend checking out the Reference Manual. The manpage for vntsd will discuss all the control sequences as well as the grouping and authorizations mentioned here.
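    Coming back to the explicit port assignments shown above: with more than a handful of domains, a small loop keeps them predictable. This is only a sketch - the domain names and the 5100 base port are made up for illustration, and, as in the examples above, it has to run while the domains are still unbound:

      #!/usr/bin/env bash
      # Give each domain its own console group and a fixed, predictable port.
      port=5100
      for dom in jupiter mars venus pluto; do
          ldm set-vcons port=$port group=$dom service=console $dom
          port=$((port + 1))
      done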

    Read the article

  • Accessing your web server via IPv6

    Being able to run your systems on IPv6, having automatic address assignment, and being able to resolve host names are the necessary building blocks in your IPv6 network infrastructure. Now that everything is in place, it is about time to enable another service to respond to IPv6 requests. The following article will guide you through the steps on how to enable Apache2 httpd to listen and respond to incoming IPv6 requests. This is the fourth article in a series on IPv6 configuration:
    Configure IPv6 on your Linux system
    DHCPv6: Provide IPv6 information in your local network
    Enabling DNS for IPv6 infrastructure
    Accessing your web server via IPv6
    Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.
    Surfing the web - IPv6 style
    Enabling IPv6 connections in Apache 2 is fairly simple. But first let's check whether your system has a running instance of Apache2 or not. You can check this like so:
    $ service apache2 status
    Apache2 is running (pid 2680).
    In case you got a 'service unknown', you have to install Apache to proceed with the following steps:
    $ sudo apt-get install apache2
    Out of the box, Apache binds to all your available network interfaces and listens to TCP port 80. To check this, run the following command:
    $ sudo netstat -lnptu | grep "apache2\W*$"
    tcp6       0      0 :::80                   :::*                    LISTEN      28306/apache2
    In this case Apache2 is already binding to IPv6 (and implicitly to IPv4). If you only got a tcp output, then your HTTPd is not yet IPv6 enabled. Check your Listen directive; depending on your system this might be in a different location than the default in Ubuntu.
    $ sudo nano /etc/apache2/ports.conf
    # If you just change the port or add more ports here, you will likely also
    # have to change the VirtualHost statement in
    # /etc/apache2/sites-enabled/000-default
    # This is also true if you have upgraded from before 2.2.9-3 (i.e. from
    # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
    # README.Debian.gz
    NameVirtualHost *:80
    Listen 80
    <IfModule mod_ssl.c>
        # If you add NameVirtualHost *:443 here, you will also have to change
        # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
        # to <VirtualHost *:443>
        # Server Name Indication for SSL named virtual hosts is currently not
        # supported by MSIE on Windows XP.
        Listen 443
    </IfModule>
    <IfModule mod_gnutls.c>
        Listen 443
    </IfModule>
    In case you don't have a ports.conf file, look for it like so:
    $ cd /etc/apache2/
    $ fgrep -r -i 'listen' ./*
    And modify the related file instead of ports.conf, which will most probably be either apache2.conf or httpd.conf anyway. Okay, please bear in mind that Apache can only bind once on the same interface and port. So, eventually, you might want to add another port which explicitly listens to IPv6 only. In that case, you would add the following in your configuration file:
    Listen 80
    Listen [2001:db8:bad:a55::2]:8080
    But this is completely optional... Anyway, just to complete all the steps: save the file, and then check the syntax like so:
    $ sudo apache2ctl configtest
    Syntax OK
    Ok, now let's apply the modifications to our running Apache2 instance:
    $ sudo service apache2 reload
    * Reloading web server config apache2   ...done.
    $ sudo netstat -lnptu | grep "apache2\W*$"
    tcp6       0      0 2001:db8:bad:a55:::8080 :::*                    LISTEN      5922/apache2
    tcp6       0      0 :::80                   :::*                    LISTEN      5922/apache2
    There we have the daemon listening on two different TCP ports. Now that the basics are in place, it's time to prepare any website to respond to incoming requests on the IPv6 address. Open up any configuration file you have below your sites-enabled folder.
    $ ls -al /etc/apache2/sites-enabled/
    ...
    $ sudo nano /etc/apache2/sites-enabled/000-default
    <VirtualHost *:80 [2001:db8:bad:a55::2]:8080>
        ServerAdmin [email protected]
        ServerName server.ios.mu
        ServerAlias server
    Here, we have to check and modify the VirtualHost directive and enable it to respond to the IPv6 address and port our web server is listening to. Save your changes, run the configuration test and reload Apache2 in order to apply your modifications. After these steps have succeeded, you can launch your favourite browser and navigate to your IPv6 enabled web server.
    Accessing an IPv6 address in the browser
    That looks like a successful surgery to me... Note: In case you received a timeout, check whether your client is operating on IPv6, too.
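    If you prefer a command-line check over the browser, curl can hit the literal IPv6 address directly. A small sketch, reusing the example address and port from above (swap in your own values); -6 forces IPv6 and -g keeps curl from interpreting the square brackets:

      $ curl -6 -g -I 'http://[2001:db8:bad:a55::2]:8080/'

    You should get the normal HTTP response headers back; a timeout here usually means the client side has no working IPv6 connectivity, as mentioned above.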

    Read the article

  • I'm a contract developer and I think I'm about to get screwed [closed]

    - by kagaku
    I do contract development on the side. You could say that I'm a contract developer? Considering I've only ever had one client I'd say that's not exactly the truth - more like I took a side job and needed some extra cash. It started out as a "rebuild our website and we'll pay you $10k" type project. Once that was complete (a bit over schedule, but certainly not over budget), the company hired me on as a "long term support" contractor. The contract is to go from March of this year, expiring on December 31st of this year - 10 months. Over which a payment is to be paid on the 30th of each month for a set amount. I've been fulfilling my end of the contract on all points - doing server maintenence, application and database changes, doing huge rush changes and pretty much just going above and beyond. Currently I'm in the middle of development of an iPhone mobile application (PhoneGap based) which is nearing completion (probably 3-4 weeks from submission). It has not been all peaches and flowers though. Each and every month when my paycheck comes due, there always seems to be an issue of sorts. These issues did not occur during the initial project, only during the support contract. The actual contract states that my check should be mailed out on the 30th of the month. I have received my check on time approximately once (on time being about 2-3 days within the 30th). I've received my paycheck as late as the 15th of the next month - over two weeks late. I've put up with it because I need the paycheck. There have been promises and promises of "we'll send it out on time next time! I promise" - only to receive it just as late the next month. When I ask about payment they give me a vibe like "why are you only worried about money?" - unfortunately I don't have the luxury of not worrying about money. The last straw was with my August payment, which should have been mailed on August 30th. I received it on September 12th. The reason for the delay? "USPS is delaying it man! we sent it out on the 1st!" is the reason I got. When I finally got the check in the mail, the postage on the envelope was marked September 10th - the date it was run through the postage machine. I've been outright lied to, at this point. I carry on working, because again - I need a paycheck. I orchestrated the move of our application to a new server, developed a bunch of new changes and continued work on the iPhone app. All told I probably went over my hourly allotment (I'm paid for 40 hours a month, I probably put in at least 50). On Saturday, the 1st, I gave the main contact at the company (a company of 3, by the way - this is not some big corporation) a ring and filled him in on the status of my work for the past two weeks. Unusually I hadn't heard from him since the middle of September. His response was "oh... well, that is nice and uh.. good job. well, we've been talking within the company about things and we've certainly got some decisions ahead of us..." - not verbatim but you get the idea (I hope?). I got out of this conversation that the site is not doing very well (which it's not) and they're considering pulling the plug. Crap, this contract is going to end early - there goes Christmas! Fine, that's alright, no problem. I'll get paid for the last months work and call it a day. Unfortunately I still haven't gotten last months check, and I'm getting dicked around now. "Oh.. we had problems transferring funds, we'll try and mail it out tomorrow" and "I left a VM with the finance guy, but I can't get ahold of him". 
So I'm getting the feeling I'm not getting paid for all the work I put in for September. This is obviously breach of contract, and I am pissed. Thinking irrationally, I considered changing all their passwords and holding their stuff hostage. Before I think it through (by the way, I am NOT going to do this, realized it would probably get me in trouble), I go and try some passwords for our various accounts. Google Apps? Oh, I'm no longer administrator here. Godaddy? Whoops, invalid password. Disqus? Nope, invalid password here too. Google Adsense / Analytics? Invalid password. Dedicated server account manager? Invalid password. Now, I have the servers root password - I just built the box last week and haven't had a chance to send the guy the root password. Wasn't in a rush, I manage the server and they never touch it. Now all of a sudden all the passwords except this one are changed; the writing is on the wall - I am out. Here's the conundrum. I have the root password, they do not. If I give them this password all the leverage I have is gone, out the door and out of my hands. During this argument of why am I not getting paid the guy sends me an email saying "oh by the way, what's the root username and password to the server?". Considering he knows absolutely nothing, I gave him an "admin" account which really has almost no rights. I still have exclusive access to the server, I just don't know where to go. I can hold their data hostage, but I'm almost positive this is the wrong thing to do. I'd consider it blackmail, regardless of whether or not I have gotten paid yet. I can "break" something on the server and give them the whole "well, if you were paying me I could fix it!" spiel. This works from a "well he's not holding their stuff hostage" point of view, but what stops them from hiring some one else to just fix the issue at hand? For all I know the guys nephew is a "l33t hax0r" and can figure it out for free. I can give in, document as much as I can and take him to small claims court. This is breach of contract, I'm not getting paid. I have a case, right? ???? Does anyone have any experience in this? What can I do? What are my options? I'm broke, I can't afford a lawyer and I can barely afford not getting this paycheck. My wife doesn't work (I work two jobs so she doesn't have to work - we have a 1 year old) and is already looking at getting a part time job to cover the bills. Long term we'll be fine, but this has pissed me off beyond belief! Help me out, I'm about to get screwed.

    Read the article

  • ndd on Solaris 10

    - by user12620111
    This is mostly a repost of LaoTsao's Weblog with some tweaks. Last time that I tried to cut & paste directly off of his page, some of the XML was messed up. I run this from my MacBook. It should also work from your Windows laptop if you use Cygwin.
    ================ If not already present, create an ssh key on your laptop ================
    # ssh-keygen -t rsa
    ================ Enable passwordless ssh from my laptop. You need to type in the root password for the remote machines once. Then, I no longer need to type in the password when I ssh or scp from my laptop to the servers. ================
    #!/usr/bin/env bash
    for server in `cat servers.txt`
    do
      echo root@$server
      cat ~/.ssh/id_rsa.pub | ssh root@$server "cat >> .ssh/authorized_keys"
    done
    ================ servers.txt ================
    testhost1
    testhost2
    ================ etc_system_addins ================
    set rpcmod:clnt_max_conns=8
    set zfs:zfs_arc_max=0x1000000000
    set nfs:nfs3_bsize=131072
    set nfs:nfs4_bsize=131072
    ================ ndd-nettune.txt ================
    #!/sbin/sh
    #
    # ident   "@(#)ndd-nettune.xml    1.0     01/08/06 SMI"
    . /lib/svc/share/smf_include.sh
    . /lib/svc/share/net_include.sh
    # Make sure that the libraries essential to this stage of booting can be found.
    LD_LIBRARY_PATH=/lib; export LD_LIBRARY_PATH
    echo "Performing Directory Server Tuning..." >> /tmp/smf.out
    #
    # Standard SuperCluster Tunables
    #
    /usr/sbin/ndd -set /dev/tcp tcp_max_buf 2097152
    /usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 1048576
    /usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 1048576
    # Reset the library path now that we are past the critical stage
    unset LD_LIBRARY_PATH
    ================ ndd-nettune.xml ================
    <?xml version="1.0"?>
    <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
    <!-- ident "@(#)ndd-nettune.xml 1.0 04/09/21 SMI" -->
    <service_bundle type='manifest' name='SUNWcsr:ndd'>
      <service name='network/ndd-nettune' type='service' version='1'>
        <create_default_instance enabled='true' />
        <single_instance />
        <dependency name='fs-minimal' type='service' grouping='require_all' restart_on='none'>
          <service_fmri value='svc:/system/filesystem/minimal' />
        </dependency>
        <dependency name='loopback-network' grouping='require_any' restart_on='none' type='service'>
          <service_fmri value='svc:/network/loopback' />
        </dependency>
        <dependency name='physical-network' grouping='optional_all' restart_on='none' type='service'>
          <service_fmri value='svc:/network/physical' />
        </dependency>
        <exec_method type='method' name='start' exec='/lib/svc/method/ndd-nettune' timeout_seconds='3' > </exec_method>
        <exec_method type='method' name='stop' exec=':true' timeout_seconds='3' > </exec_method>
        <property_group name='startd' type='framework'>
          <propval name='duration' type='astring' value='transient' />
        </property_group>
        <stability value='Unstable' />
        <template>
          <common_name>
            <loctext xml:lang='C'> ndd network tuning </loctext>
          </common_name>
          <documentation>
            <manpage title='ndd' section='1M' manpath='/usr/share/man' />
          </documentation>
        </template>
      </service>
    </service_bundle>
    ================ system_tuning.sh ================
    #!/usr/bin/env bash
    for server in `cat servers.txt`
    do
      cat etc_system_addins | ssh root@$server "cat >> /etc/system"
      scp ndd-nettune.xml root@${server}:/var/svc/manifest/site/ndd-nettune.xml
      scp ndd-nettune.txt root@${server}:/lib/svc/method/ndd-nettune
      ssh root@$server chmod +x /lib/svc/method/ndd-nettune
      ssh root@$server svccfg validate /var/svc/manifest/site/ndd-nettune.xml
      ssh root@$server svccfg import /var/svc/manifest/site/ndd-nettune.xml
    done
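    Once system_tuning.sh has run, it is worth confirming on one of the servers that the SMF service imported cleanly and that the tunables read back as expected. A quick sketch along these lines, reusing a host name from servers.txt:

      ssh root@testhost1 svcs ndd-nettune
      ssh root@testhost1 /usr/sbin/ndd /dev/tcp tcp_max_buf
      ssh root@testhost1 /usr/sbin/ndd /dev/tcp tcp_xmit_hiwat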

    Read the article

  • Postfix configuration - Using Virtualmin but server is bouncing back my mail.

    - by brodiebrodie
    I have no experience in setting up postfix, and thought virtualmin minght do the legwork for me. Appears not. When I try to send mail to the domain (either [email protected] [email protected] or [email protected]) I get the following message returned This is the mail system at host dedq239.localdomain. I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to <postmaster> If you do so, please include this problem report. You can delete your own text from the attached returned message. The mail system <[email protected]> (expanded from <[email protected]>): User unknown in virtual alias table Final-Recipient: rfc822; [email protected] Original-Recipient: rfc822;[email protected] Action: failed Status: 5.0.0 Diagnostic-Code: X-Postfix; User unknown in virtual alias table How can I diagnose the problem here? It seems that the mail gets to my server but the server fails to locally deliver the message to the correct user. (This is a guess, truthfully I have no idea what is happening). I have checked my virtual alias table and it seems to be set up correctly (I can post if this would be helpful). Can anyone give me a clue as to the next step? Thanks alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix debug_peer_level = 2 html_directory = no local_recipient_maps = $virtual_mailbox_maps mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain myorigin = $mydomain newaliases_path = /usr/bin/newaliases.postfix readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES sample_directory = /usr/share/doc/postfix-2.3.3/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination smtpd_sasl_auth_enable = yes soft_bounce = no unknown_local_recipient_reject_code = 550 virtual_alias_maps = hash:/etc/postfix/virtual My mail log file (the last entry) Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 207C6B18158: message-id=<[email protected]> Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: from=<[email protected]>, size=1805, nrcpt=1 (queue active) Sep 30 15:13:47 dedq239 postfix/error[7238]: 207C6B18158: to=<[email protected]>, orig_to=<[email protected]>, relay=none, delay=0.64, delays=0.61/0.01/0/0.02, dsn=5.0.0, status=bounced (User unknown in virtual alias table) Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 8DC13B18169: message-id=<[email protected]> Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 8DC13B18169: from=<>, size=3691, nrcpt=1 (queue active) Sep 30 15:13:47 dedq239 postfix/bounce[7239]: 207C6B18158: sender non-delivery notification: 8DC13B18169 Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: removed Sep 30 15:13:48 dedq239 postfix/smtp[7240]: 8DC13B18169: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[209.85.216.55]:25, delay=1.3, delays=0.02/0.01/0.58/0.75, dsn=2.0.0, status=sent (250 2.0.0 OK 1254348828 36si15082901pxi.91) Sep 30 15:13:48 dedq239 postfix/qmgr[7177]: 8DC13B18169: removed Sep 30 15:14:17 dedq239 postfix/smtpd[7233]: disconnect from mail-bw0-f228.google.com[209.85.218.228] etc.aliases file below I have not touched this file - myvirtualdomain is a replacement for my real domain name # Aliases in this file will NOT be expanded in 
the header from # Mail, but WILL be visible over networks or from /bin/mail. # # >>>>>>>>>> The program "newaliases" must be run after # >> NOTE >> this file is updated for any changes to # >>>>>>>>>> show through to sendmail. # # Basic system aliases -- these MUST be present. mailer-daemon: postmaster postmaster: root # General redirections for pseudo accounts. bin: root daemon: root adm: root lp: root sync: root shutdown: root halt: root mail: root news: root uucp: root operator: root games: root gopher: root ftp: root nobody: root radiusd: root nut: root dbus: root vcsa: root canna: root wnn: root rpm: root nscd: root pcap: root apache: root webalizer: root dovecot: root fax: root quagga: root radvd: root pvm: root amanda: root privoxy: root ident: root named: root xfs: root gdm: root mailnull: root postgres: root sshd: root smmsp: root postfix: root netdump: root ldap: root squid: root ntp: root mysql: root desktop: root rpcuser: root rpc: root nfsnobody: root ingres: root system: root toor: root manager: root dumper: root abuse: root newsadm: news newsadmin: news usenet: news ftpadm: ftp ftpadmin: ftp ftp-adm: ftp ftp-admin: ftp www: webmaster webmaster: root noc: root security: root hostmaster: root info: postmaster marketing: postmaster sales: postmaster support: postmaster # trap decode to catch security attacks decode: root # Person who should get root's mail #root: marc abuse-myvirtualdomain.com: [email protected] My etc/postfix/virtual file is below - again myvirtualdomain is a replacement. I think this file was generated by Virtualmin and I have tried messing around with is with no success... This is the version without my changes. myunixusername@myvirtualdomain .com myunixusername myvirtualdomain .com myvirtualdomain.com [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]
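    Given the "User unknown in virtual alias table" status in the bounce, a quick way to see what Postfix actually resolves is to query the compiled map directly (the address below just uses the placeholder domain from above), and to make sure the map was rebuilt and the daemon reloaded after the last edit:

      $ postmap -q info@myvirtualdomain.com hash:/etc/postfix/virtual
      $ postconf virtual_alias_maps
      $ sudo postmap /etc/postfix/virtual && sudo postfix reload

    If postmap -q prints nothing for an address, that address has no mapping in the table and Postfix will reject it exactly as shown in the mail log.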

    Read the article

  • Secondary DHCP server won't start on Centos 6.2

    - by Slowjoe
    I'm trying to create a backup DHCP server. Server times are in sync. Primary server starts fine. Secondary server won't start. Error from /var/log/messages is: Sep 15 14:47:45 stream dhcpd: Copyright 2004-2010 Internet Systems Consortium. Sep 15 14:47:45 stream dhcpd: All rights reserved. Sep 15 14:47:45 stream dhcpd: For info, please visit https://www.isc.org/software/dhcp/ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 25: invalid statement in peer declaration Sep 15 14:47:45 stream dhcpd: #011max-response-default Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 41: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 49: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: WARNING: Host declarations are global. They are not limited to the scope you declared them in. Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 70: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 78: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: Configuration file errors encountered -- exiting Sep 15 14:47:45 stream dhcpd: Sep 15 14:47:45 stream dhcpd: This version of ISC DHCP is based on the release available Sep 15 14:47:45 stream dhcpd: on ftp.isc.org. Features have been added and other changes Sep 15 14:47:45 stream dhcpd: have been made to the base software release in order to make Sep 15 14:47:45 stream dhcpd: it work better with this distribution. Sep 15 14:47:45 stream dhcpd: Sep 15 14:47:45 stream dhcpd: Please report for this software via the CentOS Bugs Database: Sep 15 14:47:45 stream dhcpd: http://bugs.centos.org/ Sep 15 14:47:45 stream dhcpd: Sep 15 14:47:45 stream dhcpd: exiting. Config file contents: # DHCP Server Configuration file. 
# see /usr/share/doc/dhcp*/dhcpd.conf.sample # see 'man 5 dhcpd.conf' # option domain-name "eng.foo.com"; option domain-name-servers ns0.eng.foo.com, ns1.eng.foo.com; option ntp-servers ntp.eng.foo.com; #option time-servers ntp.eng.foo.com; default-lease-time 3600; max-lease-time 7200; authoritative; log-facility local7; failover peer "dhcp-failover" { secondary; address 10.0.1.70; port 647; peer address 10.0.1.11; peer port 647; max-response-default 30; max-unacked-updates 10; load balance max seconds 3; } # # Management subnet # subnet 10.0.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option broadcast-address 10.0.0.255; option routers 10.0.0.1; option domain-search "eng.foo.com", "foo.com"; # Unknown clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 300; range 10.0.0.240 10.0.0.249; allow unknown-clients; } # Known clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 28800; range 10.0.0.150 10.0.0.199; deny unknown-clients; } include "/etc/dhcp/dhcpd.conf-engmgmt"; } # # Data subnet # subnet 10.0.1.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option broadcast-address 10.0.1.255; option routers 10.0.1.1; option domain-search "eng.foo.com", "foo.com"; # Unknown clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 300; range 10.0.1.240 10.0.1.249; allow unknown-clients; } # Known clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 28800; range 10.0.1.150 10.0.1.199; deny unknown-clients; } # For centos network installs if substring (option vendor-class-identifier, 0, 8) = "anaconda" { filename "/autohome/distro/ks/"; next-server eng-data.eng.foo.com; } # For PXE network installs if substring (option vendor-class-identifier, 0, 9) = "PXEClient" { filename "pxelinux.0"; next-server eng-data.eng.foo.com; } # For KVM PXE network installs if substring (option vendor-class-identifier, 0, 9) = "Etherboot" { filename "pxelinux.0"; next-server eng-data.eng.foo.com; } include "/etc/dhcp/dhcpd.conf-engdata"; }
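    For what it's worth, dhcpd.conf(5) documents the failover timing parameter as max-response-delay, so the "invalid statement in peer declaration" at line 25 may simply be that spelling (worth comparing against the working primary's config). A quick way to iterate on the file without bouncing the service is to run the built-in config test, using the CentOS 6 paths from above:

      $ dhcpd -t -cf /etc/dhcp/dhcpd.conf
      $ service dhcpd start    # once the test passes cleanly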

    Read the article

  • postfix 5.7.1 Relay access denied when sending mail with cron

    - by zensys
    Reluctant to ask because there is so much here about 'postfix relay access denied' but I cannot find my case: I use php (Zend Framework) to send emails outside my network using the Google mail server because I could not send mail outside my server (user: web). However when I sent out an email via cron (user: root, I believe), still using ZF, using the same mail config/credentials, I get the message: '5.7.1 Relay access denied' I guess I need to know one of two things: 1. How can I use the google smtp server from cron 2. What do I need to change in my config to send mail using my own server instead of google Though the answer to 2. is the more structural solution I assume, I am quite happy with an answer to 1. as well because I think Google is better at server maintaince (security/spam) than I am. Below my ZF application.ini mail section, main.cf and master.cf: application.ini: resources.mail.transport.type = smtp resources.mail.transport.auth = login resources.mail.transport.host = "smtp.gmail.com" resources.mail.transport.ssl = tls resources.mail.transport.port = 587 resources.mail.transport.username = [email protected] resources.mail.transport.password = xxxxxxx resources.mail.defaultFrom.email = [email protected] resources.mail.defaultFrom.name = "my company" main.cf: # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = /usr/share/doc/postfix # TLS parameters smtpd_tls_cert_file = /etc/postfix/smtpd.cert smtpd_tls_key_file = /etc/postfix/smtpd.key smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. 
myhostname = mail.second-start.nl mydomain = second-start.nl alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all html_directory = /usr/share/doc/postfix/html message_size_limit = 30720000 virtual_alias_domains = virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, mysql:/etc/postfix/mysql-virtual_email2email.cf virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf virtual_mailbox_base = /home/vmail virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 smtpd_sasl_auth_enable = yes broken_sasl_auth_clients = yes smtpd_sasl_authenticated_header = yes # see under Spam smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps virtual_transport = dovecot dovecot_destination_recipient_limit = 1 # Spam disable_vrfy_command = yes smtpd_delay_reject = yes smtpd_helo_required = yes smtpd_helo_restrictions = permit_mynetworks, check_helo_access hash:/etc/postfix/helo_access, reject_non_fqdn_hostname, reject_invalid_hostname, permit smtpd_recipient_restrictions = permit_sasl_authenticated, reject_unauth_destination, reject_invalid_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_sender_domain, reject_unknown_recipient_domain, permit_mynetworks, reject_non_fqdn_hostname, reject_rbl_client sbl.spamhaus.org, reject_rbl_client zen.spamhaus.org, reject_rbl_client cbl.abuseat.org, reject_rbl_client bl.spamcop.net, permit smtpd_error_sleep_time = 1s smtpd_soft_error_limit = 10 smtpd_hard_error_limit = 20 master.cf: # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (yes) (never) (100) # ========================================================================== smtp inet n - - - - smtpd #smtp inet n - - - 1 postscreen #smtpd pass - - - - - smtpd #dnsblog unix - - - - 0 dnsblog #tlsproxy unix - - - - 0 tlsproxy #submission inet n - - - - smtpd # -o smtpd_tls_security_level=encrypt # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #smtps inet n - - - - smtpd # -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #628 inet n - - - - qmqpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - - 300 1 oqmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 
0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - - - - smtp -o smtp_fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache # # ==================================================================== # Interfaces to non-Postfix software. Be sure to examine the manual # pages of the non-Postfix software to find out what options it wants. # # Many of the following services use the Postfix pipe(8) delivery # agent. See the pipe(8) man page for information about ${recipient} # and other message envelope options. # ==================================================================== # # maildrop. See the Postfix MAILDROP_README file for details. # Also specify in main.cf: maildrop_destination_recipient_limit=1 # maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient} # # ==================================================================== # # Recent Cyrus versions can use the existing "lmtp" master.cf entry. # # Specify in cyrus.conf: # lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4 # # Specify in main.cf one or more of the following: # mailbox_transport = lmtp:inet:localhost # virtual_transport = lmtp:inet:localhost # # ==================================================================== # # Cyrus 2.1.5 (Amos Gouaux) # Also specify in main.cf: cyrus_destination_recipient_limit=1 # #cyrus unix - n n - - pipe # user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user} # # ==================================================================== # Old example of delivery via Cyrus. # #old-cyrus unix - n n - - pipe # flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user} # # ==================================================================== # # See the Postfix UUCP_README file for configuration details. # uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) # # Other external delivery methods. # ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension} mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user} dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}

    Read the article

  • Can't connect to smtp (postfix, dovecot) after making a change and trying to change it back

    - by UberBrainChild
    I am using postfix and dovecot along with zpanel and I tried enabling SSL and then turned it off as I did not have SSL configured yet and I realized it was a bit stupid at the time. I am using CentOS 6.4. I get the following error in the mail log. (I changed my host name to "myhostname" and my domain to "mydomain.com") Oct 20 01:49:06 myhostname postfix/smtpd[4714]: connect from mydomain.com[127.0.0.1] Oct 20 01:49:16 myhostname postfix/smtpd[4714]: fatal: no SASL authentication mechanisms Oct 20 01:49:17 myhostname postfix/master[4708]: warning: process /usr/libexec/postfix/smtpd pid 4714 exit status 1 Oct 20 01:49:17 amyhostname postfix/master[4708]: warning: /usr/libexec/postfix/smtpd: bad command startup -- throttling Reading on forums and similar questions I figured it was just a service that was not running or installed. However I can see that saslauthd is currently up and running on my system and restarting it does not help. Here is my postfix master.cf # # Postfix master process configuration file. For details on the format # of the file, see the Postfix master(5) manual page. # # ***** Unused items removed ***** # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (yes) (never) (100) # ========================================================================== smtp inet n - n - - smtpd # -o content_filter=smtp-amavis:127.0.0.1:10024 # -o receive_override_options=no_address_mappings pickup fifo n - n 60 1 pickup submission inet n - - - - smtpd -o content_filter= -o receive_override_options=no_header_body_checks cleanup unix n - n - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - n 300 1 oqmgr tlsmgr unix - - n 1000? 1 tlsmgr rewrite unix - - n - - trivial-rewrite bounce unix - - n - 0 bounce defer unix - - n - 0 bounce trace unix - - n - 0 bounce verify unix - - n - 1 verify flush unix n - n 1000? 0 flush proxymap unix - - n - - proxymap smtp unix - - n - - smtp smtps inet n - - - - smtpd # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - n - - smtp -o fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - n - - showq error unix - - n - - error discard unix - - n - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - n - - lmtp anvil unix - - n - 1 anvil scache unix - - n - 1 scache # # ==================================================================== # Interfaces to non-Postfix software. Be sure to examine the manual # pages of the non-Postfix software to find out what options it wants. # ==================================================================== maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/local/bin/maildrop -d ${recipient} uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. 
user=foo argv=/usr/local/sbin/bsmtp -f $sender $nexthop $recipient # # spam/virus section # smtp-amavis unix - - y - 2 smtp -o smtp_data_done_timeout=1200 -o disable_dns_lookups=yes -o smtp_send_xforward_command=yes 127.0.0.1:10025 inet n - y - - smtpd -o content_filter= -o smtpd_helo_restrictions= -o smtpd_sender_restrictions= -o smtpd_recipient_restrictions=permit_mynetworks,reject -o mynetworks=127.0.0.0/8 -o smtpd_error_sleep_time=0 -o smtpd_soft_error_limit=1001 -o smtpd_hard_error_limit=1000 -o receive_override_options=no_header_body_checks -o smtpd_bind_address=127.0.0.1 -o smtpd_helo_required=no -o smtpd_client_restrictions= -o smtpd_restriction_classes= -o disable_vrfy_command=no -o strict_rfc821_envelopes=yes # # Dovecot LDA dovecot unix - n n - - pipe flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d ${recipient} # # Vacation mail vacation unix - n n - - pipe flags=Rq user=vacation argv=/var/spool/vacation/vacation.pl -f ${sender} -- ${recipient} And here is dovecot ## ## Dovecot config file ## listen = * disable_plaintext_auth = no protocols = imap pop3 lmtp sieve auth_mechanisms = plain login passdb { driver = sql args = /etc/zpanel/configs/dovecot2/dovecot-mysql.conf } userdb { driver = sql } userdb { driver = sql args = /etc/zpanel/configs/dovecot2/dovecot-mysql.conf } mail_location = maildir:/var/zpanel/vmail/%d/%n first_valid_uid = 101 #last_valid_uid = 0 first_valid_gid = 12 #last_valid_gid = 0 #mail_plugins = mailbox_idle_check_interval = 30 secs maildir_copy_with_hardlinks = yes service imap-login { inet_listener imap { port = 143 } } service pop3-login { inet_listener pop3 { port = 110 } } service lmtp { unix_listener lmtp { #mode = 0666 } } service imap { vsz_limit = 256M } service pop3 { } service auth { unix_listener auth-userdb { mode = 0666 user = vmail group = mail } # Postfix smtp-auth unix_listener /var/spool/postfix/private/auth { mode = 0666 user = postfix group = postfix } } service auth-worker { } service dict { unix_listener dict { mode = 0666 user = vmail group = mail } } service managesieve-login { inet_listener sieve { port = 4190 } service_count = 1 process_min_avail = 0 vsz_limit = 64M } service managesieve { } lda_mailbox_autocreate = yes lda_mailbox_autosubscribe = yes protocol lda { mail_plugins = quota sieve postmaster_address = [email protected] } protocol imap { mail_plugins = quota imap_quota trash imap_client_workarounds = delay-newmail } lmtp_save_to_detail_mailbox = yes protocol lmtp { mail_plugins = quota sieve } protocol pop3 { mail_plugins = quota pop3_client_workarounds = outlook-no-nuls oe-ns-eoh } protocol sieve { managesieve_max_line_length = 65536 managesieve_implementation_string = Dovecot Pigeonhole managesieve_max_compile_errors = 5 } dict { quotadict = mysql:/etc/zpanel/configs/dovecot2/dovecot-dict-quota.conf } plugin { # quota = dict:User quota::proxy::quotadict quota = maildir:User quota acl = vfile:/etc/dovecot/acls trash = /etc/zpanel/configs/dovecot2/dovecot-trash.conf sieve_global_path = /var/zpanel/sieve/globalfilter.sieve sieve = ~/dovecot.sieve sieve_dir = ~/sieve sieve_global_dir = /var/zpanel/sieve/ #sieve_extensions = +notify +imapflags sieve_max_script_size = 1M #sieve_max_actions = 32 #sieve_max_redirects = 4 } log_path = /var/log/dovecot.log info_log_path = /var/log/dovecot-info.log debug_log_path = /var/log/dovecot-debug.log mail_debug=yes ssl = no Does anyone have any ideas or tips on what I can try to get this working? 
Thanks for all the help EDIT: Output of postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix debug_peer_level = 2 delay_warning_time = 4 disable_vrfy_command = yes html_directory = no inet_interfaces = all mail_owner = postfix mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = localhost.$mydomain, localhost mydomain = control.yourdomain.com myhostname = control.yourdomain.com mynetworks = all newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.2.2/README_FILES recipient_delimiter = + relay_domains = proxy:mysql:/etc/zpanel/configs/postfix/mysql-relay_domains_maps.cf sample_directory = /usr/share/doc/postfix-2.2.2/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtp_use_tls = no smtpd_client_restrictions = smtpd_data_restrictions = reject_unauth_pipelining smtpd_helo_required = yes smtpd_helo_restrictions = smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_recipient_domain smtpd_sasl_auth_enable = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_path = private/auth smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_sender_restrictions = smtpd_use_tls = no soft_bounce = yes transport_maps = hash:/etc/postfix/transport unknown_local_recipient_reject_code = 550 virtual_alias_maps = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_alias_maps.cf, regexp:/etc/zpanel/configs/postfix/virtual_regexp virtual_gid_maps = static:12 virtual_mailbox_base = /var/zpanel/vmail virtual_mailbox_domains = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_domains_maps.cf virtual_mailbox_maps = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_mailbox_maps.cf virtual_minimum_uid = 101 virtual_transport = dovecot virtual_uid_maps = static:101

    Read the article

  • I added some options to stop spam with Postfix, but now it won't send email to remote domains

    - by willdanceforfun
    I had a working Postfix server, but added a few lines to my main.cf in a hope to block some common spam. Those lines I added were: smtpd_helo_required = yes smtpd_recipient_restrictions = reject_invalid_hostname, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, reject_rbl_client multi.uribl.com, reject_rbl_client dsn.rfc-ignorant.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client list.dsbl.org, reject_rbl_client sbl-xbl.spamhaus.org, reject_rbl_client bl.spamcop.net, reject_rbl_client dnsbl.sorbs.net, reject_rbl_client cbl.abuseat.org, reject_rbl_client ix.dnsbl.manitu.net, reject_rbl_client combined.rbl.msrbl.net, reject_rbl_client rabl.nuclearelephant.com, permit It appears my postfix is now receiving normal emails fine, and blocking spam emails. But when I now try to use this server myself to send to a remote domain (an email not on my server) I get bounced, with maillog saying something like this: Nov 12 06:19:36 srv postfix/smtpd[11756]: NOQUEUE: reject: RCPT from unknown[xx.xx.x.xxx]: 450 4.1.2 <[email protected]>: Recipient address rejected: Domain not found; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<[192.168.1.100]> Is that saying 'domain not found' for gmail.com? Why is that recipient address rejected? An output of my postconf-n is: alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 html_directory = no inet_interfaces = all inet_protocols = all mail_owner = postfix mailbox_size_limit = 0 mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain mydomain = primarydomain.net myhostname = mail.primarydomain.net myorigin = $myhostname newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES relay_domains = $mydestination, primarydomain.net, secondarydomain.org sample_directory = /usr/share/doc/postfix-2.6.6/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtpd_client_restrictions = permit_sasl_authenticated smtpd_helo_required = yes smtpd_recipient_restrictions = reject_invalid_hostname, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, reject_rbl_client multi.uribl.com, reject_rbl_client dsn.rfc-ignorant.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client list.dsbl.org, reject_rbl_client sbl-xbl.spamhaus.org, reject_rbl_client bl.spamcop.net, reject_rbl_client dnsbl.sorbs.net, reject_rbl_client cbl.abuseat.org, reject_rbl_client ix.dnsbl.manitu.net, reject_rbl_client combined.rbl.msrbl.net, reject_rbl_client rabl.nuclearelephant.com, permit smtpd_sasl_auth_enable = yes smtpd_sasl_path = private/auth smtpd_sasl_type = dovecot smtpd_sender_restrictions = reject_unknown_sender_domain soft_bounce = no unknown_local_recipient_reject_code = 550 virtual_alias_domains = mail.secondarydomain.org virtual_alias_maps = hash:/etc/postfix/virtual Any insight greatly appreciated. 
Edit: here is the dig mx gmail.com from the server: ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.4 <<>> mx gmail.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31766 ;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 4, ADDITIONAL: 14 ;; QUESTION SECTION: ;gmail.com. IN MX ;; ANSWER SECTION: gmail.com. 1207 IN MX 5 gmail-smtp-in.l.google.com. gmail.com. 1207 IN MX 30 alt3.gmail-smtp-in.l.google.com. gmail.com. 1207 IN MX 20 alt2.gmail-smtp-in.l.google.com. gmail.com. 1207 IN MX 40 alt4.gmail-smtp-in.l.google.com. gmail.com. 1207 IN MX 10 alt1.gmail-smtp-in.l.google.com. ;; AUTHORITY SECTION: gmail.com. 109168 IN NS ns1.google.com. gmail.com. 109168 IN NS ns4.google.com. gmail.com. 109168 IN NS ns3.google.com. gmail.com. 109168 IN NS ns2.google.com. ;; ADDITIONAL SECTION: alt1.gmail-smtp-in.l.google.com. 207 IN A 173.194.70.27 alt1.gmail-smtp-in.l.google.com. 248 IN AAAA 2a00:1450:4001:c02::1b gmail-smtp-in.l.google.com. 200 IN A 173.194.67.26 gmail-smtp-in.l.google.com. 248 IN AAAA 2a00:1450:400c:c05::1b alt3.gmail-smtp-in.l.google.com. 207 IN A 74.125.143.27 alt3.gmail-smtp-in.l.google.com. 249 IN AAAA 2a00:1450:400c:c05::1b alt2.gmail-smtp-in.l.google.com. 207 IN A 173.194.69.27 alt2.gmail-smtp-in.l.google.com. 248 IN AAAA 2a00:1450:4008:c01::1b alt4.gmail-smtp-in.l.google.com. 207 IN A 173.194.79.27 alt4.gmail-smtp-in.l.google.com. 249 IN AAAA 2607:f8b0:400e:c01::1a ns2.google.com. 281970 IN A 216.239.34.10 ns3.google.com. 281970 IN A 216.239.36.10 ns4.google.com. 281970 IN A 216.239.38.10 ns1.google.com. 281970 IN A 216.239.32.10

    Read the article

  • OpenVPN (HideMyAss) client on Ubuntu: Route only HTTP traffic

    - by Andersmith
    I want to use HideMyAss VPN (hidemyass.com) on Ubuntu Linux to route only HTTP (ports 80 & 443) traffic to the HideMyAss VPN server, and leave all the other traffic (MySQL, SSH, etc.) alone. I'm running Ubuntu on AWS EC2 instances. The problem is that when I try and run the default HMA script, I suddenly can't SSH into the Ubuntu instance anymore and have to reboot it from the AWS console. I suspect the Ubuntu instance will also have trouble connecting to the RDS MySQL database, but haven't confirmed it. HMA uses OpenVPN like this: sudo openvpn client.cfg The client configuration file (client.cfg) looks like this: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client auth-user-pass #management-query-passwords #management-hold # Disable management port for debugging port issues #management 127.0.0.1 13010 ping 5 ping-exit 30 # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. #;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. proto tcp ;proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. # All VPN Servers are added at the very end ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. # We order the hosts according to number of connections. # So no need to randomize the list # remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca ./keys/ca.crt cert ./keys/hmauser.crt key ./keys/hmauser.key # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". 
This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ;ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. #comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 # Detect proxy auto matically #auto-proxy # Need this for Vista connection issue route-metric 1 # Get rid of the cached password warning #auth-nocache #show-net-up #dhcp-renew #dhcp-release #route-delay 0 120 # added to prevent MITM attack ns-cert-type server # # Remote servers added dynamically by the master server # DO NOT CHANGE below this line # remote-random remote 173.242.116.200 443 # 0 remote 38.121.77.74 443 # 0 # etc... remote 67.23.177.5 443 # 0 remote 46.19.136.130 443 # 0 remote 173.254.207.2 443 # 0 # END

    Read the article

  • Clearing C#'s WebBrowser control's cookies for all sites WITHOUT clearing for IE itself

    - by Helgi Hrafn Gunnarsson
    Hail StackOverflow! The short version of what I'm trying to do is in the title. Here's the long version. I have a bit of a complex problem which I'm sure I will receive a lot of guesses as a response to. In order to keep the well-intended but unfortunately useless guesses to a minimum, let me first mention that the solution to this problem is not simple, so simple suggestions will unfortunately not help at all, even though I appreciate the effort. C#'s WebBrowser component is fundamentally IE itself so solutions with any sorts of caveats will almost certainly not work. I need to do exactly what I'm trying to do, and even a seemingly minor caveat will defeat the purpose completely. At the risk of sounding arrogant, I need assistance from someone who really has in-depth knowledge about C#'s WebBrowser and/or WinInet and/or how to communicate with Windows's underlying system from C#... or how to encapsulate C++ code in C#. That said, I don't expect anyone to do this for me, and I've found some promising hints which are explained later in this question. But first... what I'm trying to achieve is this. I have a Windows.Forms component which contains a WebBrowser control. This control needs to: Clear ALL cookies for ALL websites. Visit several websites, one after another, and record cookies and handle them correctly. This part works fine already so I don't have any problems with this. Rinse and repeat... theoretically forever. Now, here's the real problem. I need to clear all those cookies (for any and all sites), but only for the WebBrowser control itself and NOT the cookies which IE proper uses. What's fundamentally wrong with this approach is of course the fact that C#'s WebBrowser control is IE. But I'm a stubborn young man and I insist on it being possible, or else! ;) Here's where I'm stuck at the moment. It is quite simply impossible to clear all cookies for the WebBrowser control programmatically through C# alone. One must use DllImport and all the crazy stuff that comes with it. This chunk works fine for that purpose: [DllImport("wininet.dll", SetLastError = true)] private static extern bool InternetSetOption(IntPtr hInternet, int dwOption, IntPtr lpBuffer, int lpdwBufferLength); And then, in the function that actually does the clearing of the cookies: InternetSetOption(IntPtr.Zero, INTERNET_OPTION_END_BROWSER_SESSION, IntPtr.Zero, 0); Then all the cookies get cleared and as such, I'm happy. The program works exactly as intended, aside from the fact that it also clears IE's cookies, which must not be allowed to happen. The problem is that this also clears the cookies for IE proper, and I can't have that happen. From one fellow StackOverflower (if that's a word), Sheng Jiang proposed this to a different problem in a comment, but didn't elaborate further: "If you want to isolate your application's cookies you need to override the Cache directory registry setting via IDocHostUIHandler2::GetOverrideKeyPath" I've looked around the internet for IDocHostUIHandler2 and GetOverrideKeyPath, but I've got no idea of how to use them from C# to isolate cookies to my WebBrowser control. My experience with the Windows registry is limited to RegEdit (so I understand that it's a tree structure with different data types but that's about it... I have no in-depth knowledge of the registry's relationship with IE, for example). 
Here's what I dug up on MSDN: IDocHostUIHandler2 docs: http://msdn.microsoft.com/en-us/library/aa753275%28VS.85%29.aspx GetOverrideKeyPath docs: http://msdn.microsoft.com/en-us/library/aa753274%28VS.85%29.aspx I think I know roughly what these things do, I just don't know how to use them. So, I guess that's it! Any help is greatly appreciated.
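    For reference, a minimal sketch of another wininet-based idea I have seen suggested for keeping the control's cookies out of IE's persistent store: telling the current process not to persist cookies at all. The constant values are assumptions that should be checked against WinInet.h, and this gives session-only cookies rather than a redirected cookie path, so it may or may not fit the bill:

        using System;
        using System.Runtime.InteropServices;

        static class WebBrowserCookieSession
        {
            // Values believed to match WinInet.h -- verify before relying on them.
            private const int INTERNET_OPTION_SUPPRESS_BEHAVIOR = 81;
            private const int INTERNET_SUPPRESS_COOKIE_PERSIST = 3;

            [DllImport("wininet.dll", SetLastError = true)]
            private static extern bool InternetSetOption(
                IntPtr hInternet, int dwOption, ref int lpBuffer, int lpdwBufferLength);

            // Call once at startup: cookies set by the WebBrowser control in this
            // process are then kept in memory only and never written to IE's store.
            public static void SuppressPersistentCookies()
            {
                int flag = INTERNET_SUPPRESS_COOKIE_PERSIST;
                InternetSetOption(IntPtr.Zero, INTERNET_OPTION_SUPPRESS_BEHAVIOR,
                                  ref flag, sizeof(int));
            }
        }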

    Read the article

  • protobuf-net: incorrect wire-type exception deserializing Guid properties

    - by Paul Smith
    I'm having issues deserializing certain Guid properties of ORM-generated entities using protobuf-net. Here's a simplified example of the code (reproduces most elements of the scenario, but doesn't reproduce the behavior; I can't expose our internal entities, so I'm looking for clues to account for the exception). Say I have a class, Account with an AccountID read-only guid, and an AccountName read-write string. I serialize & immediately deserialize a clone. Deserializing throws an Incorrect wire-type deserializing Guid exception while deserializing. Here's example usage... Account acct = new Account() { AccountName = "Bob's Checking" }; Debug.WriteLine(acct.AccountID.ToString()); using (MemoryStream ms = new MemoryStream()) { ProtoBuf.Serializer.Serialize<Account>(ms, acct); Debug.WriteLine(Encoding.UTF8.GetString(ms.GetBuffer())); ms.Position = 0; Account clone = ProtoBuf.Serializer.Deserialize<Account>(ms); Debug.WriteLine(clone.AccountID.ToString()); } And here's an example ORM'd class (simplified, but demonstrates the relevant semantics I can think of). Uses a shell game to deserialize read-only properties by exposing the backing field ("can't write" essentially becomes "shouldn't write," but we can scan code for instances of assigning to these fields, so the hack works for our purposes). Again, this does not reproduce the exception behavior; I'm looking for clues as to what could: [DataContract()] [Serializable()] public partial class Account { public Account() { _accountID = Guid.NewGuid(); } [XmlAttribute("AccountID")] [DataMember(Name = "AccountID", Order = 1)] public Guid _accountID; /// <summary> /// A read-only property; XML, JSON and DataContract serializers all seem /// to correctly recognize the public backing field when deserializing: /// </summary> [IgnoreDataMember] [XmlIgnore] public Guid AccountID { get { return this._accountID; } } [IgnoreDataMember] protected string _accountName; [DataMember(Name = "AccountName", Order = 2)] [XmlAttribute] public string AccountName { get { return this._accountName; } set { this._accountName = value; } } } XML, JSON and DataContract serializers all seem to serialize / deserialize these object graphs just fine, so the attribute arrangement basically works. I've tried protobuf-net with lists vs. single instances, different prefix styles, etc., but still always get the 'incorrect wire-type ... Guid' exception when deserializing. So the specific questions is, is there any known explanation / workaround for this? I'm at a loss trying to trace what circumstances (in the real code but not the example) could be causing it. We hope not to have to create a protobuf dependency directly in the entity layer; if that's the case, we'll probably create proxy DTO entities with all public properties having protobuf attributes. (This is a subjective issue I have with all declarative serialization models; it's a ubiquitous pattern & I understand why it arose, but IMO, if we can put a man on the moon, then "normal" should be to have objects and serialization contracts decoupled. ;-) ) Thanks!
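    To make the proxy-DTO fallback mentioned above concrete, here is a minimal sketch of a contract type that carries the Guid as a string, so the wire format never touches protobuf-net's built-in Guid handling at all. The names are illustrative; only the [ProtoContract]/[ProtoMember] attributes and the Serializer calls are assumed from protobuf-net:

        using System;
        using System.IO;
        using ProtoBuf;

        [ProtoContract]
        public class AccountDto
        {
            [ProtoMember(1)]
            public string AccountId { get; set; }   // Guid stored as text on the wire

            [ProtoMember(2)]
            public string AccountName { get; set; }
        }

        class Demo
        {
            static void Main()
            {
                var dto = new AccountDto
                {
                    AccountId = Guid.NewGuid().ToString(),
                    AccountName = "Bob's Checking"
                };

                using (var ms = new MemoryStream())
                {
                    Serializer.Serialize(ms, dto);
                    ms.Position = 0;
                    var clone = Serializer.Deserialize<AccountDto>(ms);
                    Guid roundTripped = Guid.Parse(clone.AccountId);  // survives intact
                }
            }
        }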

    Read the article

  • Best practices for using the Entity Framework with WPF DataBinding

    - by Ken Smith
    I'm in the process of building my first real WPF application (i.e., the first intended to be used by someone besides me), and I'm still wrapping my head around the best way to do things in WPF. It's a fairly simple data access application using the still-fairly-new Entity Framework, but I haven't been able to find a lot of guidance online for the best way to use these two technologies (WPF and EF) together. So I thought I'd toss out how I'm approaching it, and see if anyone has any better suggestions. I'm using the Entity Framework with SQL Server 2008. The EF strikes me as both much more complicated than it needs to be, and not yet mature, but Linq-to-SQL is apparently dead, so I might as well use the technology that MS seems to be focusing on. This is a simple application, so I haven't (yet) seen fit to build a separate data layer around it. When I want to get at data, I use fairly simple Linq-to-Entity queries, usually straight from my code-behind, e.g.: var families = from family in entities.Family.Include("Person") orderby family.PrimaryLastName, family.Tag select family; Linq-to-Entity queries return an IOrderedQueryable result, which doesn't automatically reflect changes in the underlying data, e.g., if I add a new record via code to the entity data model, the existence of this new record is not automatically reflected in the various controls referencing the Linq query. Consequently, I'm throwing the results of these queries into an ObservableCollection, to capture underlying data changes: familyOC = new ObservableCollection<Family>(families.ToList()); I then map the ObservableCollection to a CollectionViewSource, so that I can get filtering, sorting, etc., without having to return to the database. familyCVS.Source = familyOC; familyCVS.View.Filter = new Predicate<object>(ApplyFamilyFilter); familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("PrimaryLastName", System.ComponentModel.ListSortDirection.Ascending)); familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("Tag", System.ComponentModel.ListSortDirection.Ascending)); I then bind the various controls and what-not to that CollectionViewSource: <ListBox DockPanel.Dock="Bottom" Margin="5,5,5,5" Name="familyList" ItemsSource="{Binding Source={StaticResource familyCVS}, Path=., Mode=TwoWay}" IsSynchronizedWithCurrentItem="True" ItemTemplate="{StaticResource familyTemplate}" SelectionChanged="familyList_SelectionChanged" /> When I need to add or delete records/objects, I manually do so from both the entity data model, and the ObservableCollection: private void DeletePerson(Person person) { entities.DeleteObject(person); entities.SaveChanges(); personOC.Remove(person); } I'm generally using StackPanel and DockPanel controls to position elements. Sometimes I'll use a Grid, but it seems hard to maintain: if you want to add a new row to the top of your grid, you have to touch every control directly hosted by the grid to tell it to use a new line. Uggh. (Microsoft has never really seemed to get the DRY concept.) I almost never use the VS WPF designer to add, modify or position controls. The WPF designer that comes with VS is sort of vaguely helpful to see what your form is going to look like, but even then, well, not really, especially if you're using data templates that aren't binding to data that's available at design time. If I need to edit my XAML, I take it like a man and do it manually. Most of my real code is in C# rather than XAML. 
As I've mentioned elsewhere, entirely aside from the fact that I'm not yet used to "thinking" in it, XAML strikes me as a clunky, ugly language, that also happens to come with poor designer and intellisense support, and that can't be debugged. Uggh. Consequently, whenever I can see clearly how to do something in C# code-behind that I can't easily see how to do in XAML, I do it in C#, with no apologies. There's been plenty written about how it's a good practice to almost never use code-behind in WPF page (say, for event-handling), but so far at least, that makes no sense to me whatsoever. Why should I do something in an ugly, clunky language with god-awful syntax, an astonishingly bad editor, and virtually no type safety, when I can use a nice, clean language like C# that has a world-class editor, near-perfect intellisense, and unparalleled type safety? So that's where I'm at. Any suggestions? Am I missing any big parts of this? Anything that I should really think about doing differently?
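    One small thing that has helped me keep the manual two-step (context plus ObservableCollection) from drifting apart is routing every add and delete through a single helper instead of repeating it in each handler. A rough sketch, with the delegates standing in for whatever context calls you already use (AddToPerson, DeleteObject, SaveChanges):

        using System;
        using System.Collections.ObjectModel;

        // Couples "write to the ObjectContext" and "mirror into the bound collection"
        // so the two can only ever change together.
        public class BoundSet<T> where T : class
        {
            private readonly Action<T> addToContext;
            private readonly Action<T> deleteFromContext;
            private readonly Action saveChanges;

            public BoundSet(ObservableCollection<T> items,
                            Action<T> addToContext,
                            Action<T> deleteFromContext,
                            Action saveChanges)
            {
                Items = items;
                this.addToContext = addToContext;
                this.deleteFromContext = deleteFromContext;
                this.saveChanges = saveChanges;
            }

            public ObservableCollection<T> Items { get; private set; }

            public void Add(T item)
            {
                addToContext(item);
                saveChanges();
                Items.Add(item);      // the CollectionViewSource picks this up immediately
            }

            public void Delete(T item)
            {
                deleteFromContext(item);
                saveChanges();
                Items.Remove(item);
            }
        }

    The CollectionViewSource is then pointed at boundSet.Items exactly as before.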

    Read the article

  • Speeding up a search .net 4.0

    - by user231465
    Wondering if I can speed up the search. I need to build a functionality that has to be used by many UI screens The one I have got works but I need to make sure I am implementing a fast algoritim if you like It's like an incremental search. User types a word to search for eg const string searchFor = "Guinea"; const char nextLetter = ' ' It looks in the list and returns 2 records "Guinea and Guinea Bissau " User types a word to search for eg const string searchFor = "Gu"; const char nextLetter = 'i' returns 3 results. This is the function but I would like to speed it up. Is there a pattern for this kind of search? class Program { static void Main() { //Find all countries that begin with string + a possible letter added to it //const string searchFor = "Guinea"; //const char nextLetter = ' '; //returns 2 results const string searchFor = "Gu"; const char nextLetter = 'i'; List<string> result = FindPossibleMatches(searchFor, nextLetter); result.ForEach(x=>Console.WriteLine(x)); //returns 3 results Console.Read(); } /// <summary> /// Find all possible matches /// </summary> /// <param name="searchFor">string to search for</param> /// <param name="nextLetter">pretend user as just typed a letter</param> /// <returns></returns> public static List<string> FindPossibleMatches (string searchFor, char nextLetter) { var hashedCountryList = new HashSet<string>(CountriesList()); var result=new List<string>(); IEnumerable<string> tempCountryList = hashedCountryList.Where(x => x.StartsWith(searchFor)); foreach (string item in tempCountryList) { string tempSearchItem; if (nextLetter == ' ') { tempSearchItem = searchFor; } else { tempSearchItem = searchFor + nextLetter; } if(item.StartsWith(tempSearchItem)) { result.Add(item); } } return result; } /// <summary> /// Returns list of countries. /// </summary> public static string[] CountriesList() { return new[] { "Afghanistan", "Albania", "Algeria", "American Samoa", "Andorra", "Angola", "Anguilla", "Antarctica", "Antigua And Barbuda", "Argentina", "Armenia", "Aruba", "Australia", "Austria", "Azerbaijan", "Bahamas", "Bahrain", "Bangladesh", "Barbados", "Belarus", "Belgium", "Belize", "Benin", "Bermuda", "Bhutan", "Bolivia", "Bosnia Hercegovina", "Botswana", "Bouvet Island", "Brazil", "Brunei Darussalam", "Bulgaria", "Burkina Faso", "Burundi", "Byelorussian SSR", "Cambodia", "Cameroon", "Canada", "Cape Verde", "Cayman Islands", "Central African Republic", "Chad", "Chile", "China", "Christmas Island", "Cocos (Keeling) Islands", "Colombia", "Comoros", "Congo", "Cook Islands", "Costa Rica", "Cote D'Ivoire", "Croatia", "Cuba", "Cyprus", "Czech Republic", "Czechoslovakia", "Denmark", "Djibouti", "Dominica", "Dominican Republic", "East Timor", "Ecuador", "Egypt", "El Salvador", "England", "Equatorial Guinea", "Eritrea", "Estonia", "Ethiopia", "Falkland Islands", "Faroe Islands", "Fiji", "Finland", "France", "Gabon", "Gambia", "Georgia", "Germany", "Ghana", "Gibraltar", "Great Britain", "Greece", "Greenland", "Grenada", "Guadeloupe", "Guam", "Guatemela", "Guernsey", "Guiana", "Guinea", "Guinea Bissau", "Guyana", "Haiti", "Heard Islands", "Honduras", "Hong Kong", "Hungary", "Iceland", "India", "Indonesia", "Iran", "Iraq", "Ireland", "Isle Of Man", "Israel", "Italy", "Jamaica", "Japan", "Jersey", "Jordan", "Kazakhstan", "Kenya", "Kiribati", "Korea, South", "Korea, North", "Kuwait", "Kyrgyzstan", "Lao People's Dem. 
Rep.", "Latvia", "Lebanon", "Lesotho", "Liberia", "Libya", "Liechtenstein", "Lithuania", "Luxembourg", "Macau", "Macedonia", "Madagascar", "Malawi", "Malaysia", "Maldives", "Mali", "Malta", "Mariana Islands", "Marshall Islands", "Martinique", "Mauritania", "Mauritius", "Mayotte", "Mexico", "Micronesia", "Moldova", "Monaco", "Mongolia", "Montserrat", "Morocco", "Mozambique", "Myanmar", "Namibia", "Nauru", "Nepal", "Netherlands", "Netherlands Antilles", "Neutral Zone", "New Caledonia", "New Zealand", "Nicaragua", "Niger", "Nigeria", "Niue", "Norfolk Island", "Northern Ireland", "Norway", "Oman", "Pakistan", "Palau", "Panama", "Papua New Guinea", "Paraguay", "Peru", "Philippines", "Pitcairn", "Poland", "Polynesia", "Portugal", "Puerto Rico", "Qatar", "Reunion", "Romania", "Russian Federation", "Rwanda", "Saint Helena", "Saint Kitts", "Saint Lucia", "Saint Pierre", "Saint Vincent", "Samoa", "San Marino", "Sao Tome and Principe", "Saudi Arabia", "Scotland", "Senegal", "Seychelles", "Sierra Leone", "Singapore", "Slovakia", "Slovenia", "Solomon Islands", "Somalia", "South Africa", "South Georgia", "Spain", "Sri Lanka", "Sudan", "Suriname", "Svalbard", "Swaziland", "Sweden", "Switzerland", "Syrian Arab Republic", "Taiwan", "Tajikista", "Tanzania", "Thailand", "Togo", "Tokelau", "Tonga", "Trinidad and Tobago", "Tunisia", "Turkey", "Turkmenistan", "Turks and Caicos Islands", "Tuvalu", "Uganda", "Ukraine", "United Arab Emirates", "United Kingdom", "United States", "Uruguay", "Uzbekistan", "Vanuatu", "Vatican City State", "Venezuela", "Vietnam", "Virgin Islands", "Wales", "Western Sahara", "Yemen", "Yugoslavia", "Zaire", "Zambia", "Zimbabwe" }; } } } Any suggestions? Thanks

    Read the article

  • .NET 4.0 Dynamic object used statically?

    - by Kevin Won
    I've gotten quite sick of XML configuration files in .NET and want to replace them with a format that is more sane. Therefore, I'm writing a config file parser for C# applications that will take a custom config file format, parse it, and create a Python source string that I can then execute in C# and use as a static object (yes that's right--I want a static (not the static type dyanamic) object in the end). Here's an example of what my config file looks like: // my custom config file format GlobalName: ExampleApp Properties { ExternalServiceTimeout: "120" } Python { // this allows for straight python code to be added to handle custom config def MyCustomPython: return "cool" } Using ANTLR I've created a Lexer/Parser that will convert this format to a Python script. So assume I have that all right and can take the .config above and run my Lexer/Parser on it to get a Python script out the back (this has the added benefit of giving me a validation tool for my config). By running the resultant script in C# // simplified example of getting the dynamic python object in C# // (not how I really do it) ScriptRuntime py = Python.CreateRuntime(); dynamic conf = py.UseFile("conftest.py"); dynamic t = conf.GetConfTest("test"); I can get a dynamic object that has my configuration settings. I can now get my config file settings in C# by invoking a dynamic method on that object: //C# calling a method on the dynamic python object var timeout = t.GetProperty("ExternalServiceTimeout"); //the config also allows for straight Python scripting (via the Python block) var special = t.MyCustonPython(); of course, I have no type safety here and no intellisense support. I have a dynamic representation of my config file, but I want a static one. I know what my Python object's type is--it is actually newing up in instance of a C# class. But since it's happening in python, it's type is not the C# type, but dynamic instead. What I want to do is then cast the object back to the C# type that I know the object is: // doesn't work--can't cast a dynamic to a static type (nulls out) IConfigSettings staticTypeConfig = t as IConfigSettings Is there any way to figure out how to cast the object to the static type? I'm rather doubtful that there is... so doubtful that I took another approach of which I'm not entirely sure about. I'm wondering if someone has a better way... So here's my current tactic: since I know the type of the python object, I am creating a C# wrapper class: public class ConfigSettings : IConfigSettings that takes in a dynamic object in the ctor: public ConfigSettings(dynamic settings) { this.DynamicProxy = settings; } public dynamic DynamicProxy { get; private set; } Now I have a reference to the Python dynamic object of which I know the type. 
So I can then just put wrappers around the Python methods that I know are there: // wrapper access to the underlying dynamic object // this makes my dynamic object appear 'static' public string GetSetting(string key) { return this.DynamicProxy.GetProperty(key).ToString(); } Now the dynamic object is accessed through this static proxy and thus can obviously be passed around in the static C# world via interface, etc: // dependency inject the dynamic object around IBusinessLogic logic = new BusinessLogic(IConfigSettings config); This solution has the benefits of all the static typing stuff we know and love while at the same time giving me the option of 'bailing out' to dynamic too: // the DynamicProxy property give direct access to the dynamic object var result = config.DynamicProxy.MyCustomPython(); but, man, this seems rather convoluted way of getting to an object that is a static type in the first place! Since the whole dynamic/static interaction world is new to me, I'm really questioning if my solution is optimal or if I'm missing something (i.e. some way of casting that dynamic object to a known static type) about how to bridge the chasm between these two universes.
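    Pulling the pieces above together, the whole bridge currently looks roughly like this (the GetConfTest entry point and GetProperty call are the ones from my Python script; everything else is just the wrapper spelled out):

        using IronPython.Hosting;
        using Microsoft.Scripting.Hosting;

        public interface IConfigSettings
        {
            string GetSetting(string key);
            dynamic DynamicProxy { get; }   // escape hatch for config-defined Python methods
        }

        public class ConfigSettings : IConfigSettings
        {
            public ConfigSettings(dynamic settings) { DynamicProxy = settings; }

            public dynamic DynamicProxy { get; private set; }

            // Static-looking access to the dynamic Python object underneath.
            public string GetSetting(string key)
            {
                return DynamicProxy.GetProperty(key).ToString();
            }
        }

        public static class ConfigLoader
        {
            // Runs the generated Python and hands the result back behind the interface.
            public static IConfigSettings Load(string pythonFile, string name)
            {
                ScriptRuntime py = Python.CreateRuntime();
                dynamic module = py.UseFile(pythonFile);
                dynamic settings = module.GetConfTest(name);
                return new ConfigSettings(settings);
            }
        }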

    Read the article

  • Simplest way to do holiday calculation?

    - by brainfrog
    I want to make a little free calendar program to help me and others calculate how much time we have got left in a project. I mean real working time, not just time. Time in a raw form is not saying much. Typically when my boss tells me that I have time until 05-05-2011 it doesn't tell me really how much time I have to do my job. You know...so many things stop me from work: A) beeing at home, not at work (so called "free time" or "spare time"). That is in my case I work exactly 8 hours a day and then the cleaning ladies throw me out of the office with their incredible loud industrial vacuum cleaners every evening (my boss accepts that as an excuse to go home in time, regularly). B) weekends, or more precisely saturdays and sundays C) official holiday rescuing me from having to go to work. what I want to do is make a little utility which tells me how many working hours I really have in a given time period. The first two things A and B are pretty easy to implement. But the last thing C scares my pants off. Holidays. OOOHHH man. You know what that means. Chaos. Pure chaos. The huge question is: HOW TO CALCULATE HOLIDAYS?! Since I want my program to be useful for anyone anywhere in the world, I can't just hardcode all holidays for my little town. So which options do I have? I) I could hand-craft downloadable lists of holidays. Users search them within the application and download them from an webserver. Or I ship all of them in the package. But I would get very, very old if I tried that by myself for every country, state and town. II) I make an initial data sheet with holidays for my town, and don't care about the rest. However, I make that sheet with an how-to public, so that everyone who feels like beeing very nice can provide holiday data for his country / region / whatever. Those are made public on a webserver and everyone can get the data packages he/she needs for the app. III) ? I care a lot about usability. I don't want to make an ugly linux hack style hard to use app that only computer freaks can use. So you need to tell me more about holiday science. I was never really clever at this. I assume every single country in the world has it's own set of holidays. In every country there may be several states. For example the US has some, and Germany has also some states. Holidays vary from state to state. But I know from an good programmer he told me never assume anything. So the questions about holiday science are: Which categories do I need to make holiday-data-packs searchable? A guy from India should find quickly his holiday data pack, and a guy from Sillicon Valley should find his pack as equally fast. It makes most sense to me to filter for COUNTRY STATE WHATEVER. Like a drill-down-search. Did I miss something? What would be the best data format to hold holiday information? A holiday has a start and end date and a name. That should be enough. Would I put all this stuff in thousands of XML files? How would you go about this? Any hint / help is highly welcome! Thanks to everyone!
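    For the calculation itself (as opposed to the data-pack problem), here is a minimal sketch of what I have in mind, assuming an 8-hour day, free weekends, and a holiday set loaded from whichever data pack the user picked:

        using System;
        using System.Collections.Generic;

        public static class WorkingTime
        {
            // Counts 8-hour working days between two dates (inclusive),
            // skipping Saturdays, Sundays and anything in the holiday set.
            public static int WorkingHours(DateTime start, DateTime deadline, HashSet<DateTime> holidays)
            {
                int hours = 0;
                for (DateTime day = start.Date; day <= deadline.Date; day = day.AddDays(1))
                {
                    bool weekend = day.DayOfWeek == DayOfWeek.Saturday
                                || day.DayOfWeek == DayOfWeek.Sunday;
                    if (!weekend && !holidays.Contains(day))
                        hours += 8;
                }
                return hours;
            }
        }

    So the only genuinely hard part left is where the holiday dates come from, which is the question above.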

    Read the article

  • If 'Architect' is a dirty word - what's the alternative; when not everyone can actually design a goo

    - by Andras Zoltan
    Now - I'm a developer first and foremost; but whenever I sit down to work on a big project with lots of interlinking components and areas, I will forward-plan my interfaces, base classes etc as best I can - putting on my Architect hat. For a few weeks I've been doing this for a huge project - designing whole swathes of interfaces etc for a business-wide platform that we're developing. The basic structure is a couple of big projects that consists of service and data interfaces, with some basic implementations of all of these. On their own, these assemblies are useless though, as they are simply intended intended as a scaffold on which to build a business-specific implementation (we have a lot of businesses). Therefore, the design of the core platform is absolutely crucial, since consumers of the system are not intended to know which implementation they are actually using. In the past it's not worked so well, but after a few proof-of-concepts and R&D projects this new platform is now growing nicely and is already proving itself. Then somebody else gets involved in the project - he's a TDD man who sees code-level architecture as an irrelevance and is definitely from the camp that 'architect' is a dirty word - I should add that our working relationship is very good despite this :) He's open about the fact that he can't architect in advance and obviously TDD really helps him because it allows him to evolve his systems over time. That I get, and totally understand; but it means that his coding style, basically, doesn't seem to be able to honour the architecture that I've been putting in place. Now don't get me wrong - he's an awesome coder; but the other day he needed to extend one of his components (an implementation of a core interface) to bring in an extra implementation-specific dependency; and in doing so he extended the core interface as well as his implementation (he uses ReSharper), thus breaking the independence of the whole interface. When I pointed out his error to him, he was dismayed. Being test-first, all that mattered to him was that he'd made his tests pass, and just said 'well, I need that dependency, so can't we put it in?'. Of course we could put it in, but I was frustrated that he couldn't see that refactoring the generic interface to incorporate an implementation-specific feature was just wrong! But it is all very Charlie Brown to him (you know the sound the adults make when they're talking to the children) - as far as he's concerned we don't need to worry about it because we can always refactor. The problem is, the culture of test-write-refactor is all very well and good - but not when you're dealing with a platform that is going to be shared out among so many projects that you could never get them all in one place to make the refactorings work. In my opinion, sometimes you actually have to think about what you're doing, and not just let nature take its course. Am I simply fulfilling the role of Architect as a dirty word here? I believe that architecture is important and should be thought about before code gets written; unless it's a particularly small project. But when you're working in a team of people who don't think that way, or even can't think that way how can you actually get this across? Is it a case of simply making the architecture off-limits to changes by other people? I don't want to start having bloody committees just to be able to grow the system; but equally I don't want to be the only one responsible for it all. 
    Do you think the architect role is a waste of time? Is it at odds with TDD and other practices? Can this mix of different practices be made to work, or should I just be a lot less precious (and in so doing allow a generic platform to become useless!)? Or do I just lay down the law? Any ideas/experiences/views gratefully received.

    Read the article

  • Extjs4: editable rowbody

    - by peter
    in my first ExtJs4 project i use a editable grid with the feature rowbody to have a big textfield displayed under each row. I want it to be editable on a dblclick. I succeeded in doing so by replacing the innerHTML of the rowbody by a textarea but the special keys don't do what they are supposed to do (move the cursor). If a use the textarea in a normal field i don't have this probem. Same problem in IE7 and FF4 gridInfo = Ext.create('Ext.ux.LiveSearchGridPanel', { id: 'gridInfo', height: '100%', width: '100%', store: storeInfo, columnLines: true, selType: 'cellmodel', columns: [ {text: "Titel", flex: 1, dataIndex: 'titel', field:{xtype:'textfield'}}, {text: "Tags", id: "tags", flex: 1, dataIndex: 'tags', field:{xtype:'textfield'}}, {text: "Hits", dataIndex: 'hits'}, {text: "Last Updated", renderer: Ext.util.Format.dateRenderer('d/m/Y'), dataIndex: 'lastChange'} ], plugins: [ Ext.create('Ext.grid.plugin.CellEditing', { clicksToEdit: 1 }) ], features: [ { ftype: 'rowbody', getAdditionalData: function(data, idx, record, orig) { var headerCt = this.view.headerCt, colspan = headerCt.getColumnCount(); return { rowBody: data.desc, //the big textfieldvalue, can't use a textarea here 8< rowBodyCls: this.rowBodyCls, rowBodyColspan: colspan }; } }, {ftype: 'rowwrap'} ] }); me.on('rowbodydblclick', function(gridView, el, event, o) { ... rb = td.down('.x-grid-rowbody').dom; var value = rb.innerText?rb.innerText:rb.textContent; rb.innerHTML = ''; Ext.create('Ext.form.field.TextArea', { id: 'textarea1', value : value, renderTo: rb, border: false, enterIsSpecial: true, enableKeyEvents: true, disableKeyFilter: true, listeners: { 'blur': function(el, o) { rb.innerHTML = el.value; }, 'specialkey': function(field, e){ console.log(e.keyCode); //captured but nothing happens } } }).show(); damn, can't publish my own solution, looks like somebody else has to answer, anyway, here is the function that works function editDesc(me, gridView, el, event, o) { var width = Ext.fly(el).up('table').getWidth(); var rb = event.target; var value = rb.innerText?rb.innerText:rb.textContent; rb.innerHTML = '' var txt = Ext.create('Ext.form.field.TextArea', { value : value, renderTo: rb, border: false, width: width, height: 300, enterIsSpecial: true, disableKeyFilter: true, listeners: { 'blur': function(el, o) { var value = el.value.replace('\n','<br>') rb.innerHTML = value; }, 'specialkey': function(field, e){ e.stopPropagation(); } } }); var txtTextarea = Ext.fly(rb).down('textarea'); txtTextarea.dom.style.color = 'blue'; txtTextarea.dom.style.fontSize = '11px'; } Hi Molecule Man, as an alternative to the approach above i tried the Ext.Editor. It works but i want it inline but when i render it to the rowbody, the field blanks and i have no editor, any ideas ? gridInfo = Ext.create('Ext.grid.Panel', { id: 'gridInfo', height: '100%', width: '100%', store: storeInfo, columnLines: true, selType: 'cellmodel', viewConfig: {stripeRows: false, trackOver: true}, columns: [ {text: "Titel", flex: 1, dataIndex: 'titel', field:{xtype:'textfield'}}, ... 
{text: "Last Updated", renderer: Ext.util.Format.dateRenderer('d/m/Y'), dataIndex: 'lastChange'} ], plugins: [ Ext.create('Ext.grid.plugin.CellEditing', { clicksToEdit: 1 }) ], features: [ { ftype: 'rowbody', getAdditionalData: function(data, idx, record, orig) { var headerCt = this.view.headerCt, colspan = headerCt.getColumnCount(); return { rowBody: data.desc, rowBodyCls: this.rowBodyCls, rowBodyColspan: colspan }; } } ], listeners:{ rowbodyclick: function(gridView, el, event) { //werkt editDesc(this, gridView, el, event); } } }); function editDesc(me, gridView, el, event, o) { var rb = event.target; me.txt = new Ext.Editor({ field: {xtype: 'textarea'}, updateEl: true, cancelOnEsc: true, floating: true, renderTo: rb //when i do this, the field becomes empty and i don't get the editor }); me.txt.startEdit(el); }

    Read the article

  • which control in vs08 aspx c# will be able to take html tags as input and display it formatted accor

    - by user287745
    I asked a few questions about this and got answers indicating I would have to build a custom editor (Markdown or similar) to achieve something like the stackoverflow.com/questions/ask page. There is a time limitation, and the requirements are:

    1) allow users to use HTML tags when giving input,
    2) save that complete input to the SQL DB, and
    3) display the data from the DB within a control that renders the formatting as the tags specify.

    I am aware that the Label and Literal controls support HTML tags; the problem is how to allow the user to input them, because the TextBox does not seem to support HTML. Thank you. I need help implementing the effect of the "Write a new post" page on stackoverflow.com.

    <%-- The editor --%>
    <asp:UpdatePanel ID="UpdatePanel7" runat="server">
        <ContentTemplate>
            <asp:Label ID="Label4" runat="server" Text="Label"></asp:Label>
            <div id="textchanging" onkeyup="textoftextbox('TextBox3'); return false;">
                <asp:TextBox ID="TextBox3" runat="server" CausesValidation="True"></asp:TextBox>
            </div>
            <%-- OnTextChanged="textoftextbox('TextBox3'); return false;" gives a "too many literals" error. --%>
            <asp:Button ID="Button6" runat="server" Text="Post The Comment, The New, Write on Wall" OnClick="Button6_Click" />
            <asp:Label ID="Label2" runat="server" Text="Label"></asp:Label>
            <asp:TextBox ID="TextBox4" runat="server" CausesValidation="True"></asp:TextBox>
            <asp:Literal ID="Literal2" runat="server" Text="this must change" />
            <%-- here comes the div --%>
            <div id="displayingarea" runat="server">This area will change</div>
        </ContentTemplate>
    </asp:UpdatePanel>
    <%-- The editor ends --%>

    protected void Button6_Click(object sender, EventArgs e)
    {
        Label3.Text = displayingarea.InnerHtml;
    }

    As you can see, I have implemented the effect of text typed in the textbox appearing as a reflection in the div, fully formatted according to the HTML tags used. The textbox does not allow HTML, but it doesn't have to: the script just extracts what is typed without letting the server know, and the HTML of the div changes as I type in the textbox. There is a "post" button inside an UpdatePanel to avoid a full postback of the page. What I need is this: when the button is clicked, the value of the div's innerHTML should be passed on to the label. Yes, I know the changes are only made on the client side, so when the button click event occurs the server is not aware of the new data inside the div, and therefore it assigns the div's original HTML to the label. I need help overcoming this. How is it that when we enter text in a textbox and press a button, code like label1.Text = textbox1.Text works even in an UpdatePanel, but the code above for extracting the innerHTML typed at the user's end does not?
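    One way I could imagine getting the client-side HTML back to the server (a rough sketch, not from the original post: the asp:HiddenField named hfDisplay and its wiring are assumptions) is to copy the div's innerHTML into a hidden form field just before the postback, because hidden field values travel with the post while innerHTML changes do not:

    // Assumes an <asp:HiddenField ID="hfDisplay" runat="server" /> inside the same form,
    // and that this script sits inline in the .aspx so the <%= %> expressions are expanded.
    function copyDisplayAreaToHiddenField() {
        var area = document.getElementById('<%= displayingarea.ClientID %>'); // the runat="server" div
        var hidden = document.getElementById('<%= hfDisplay.ClientID %>');    // hypothetical hidden field
        if (area && hidden) {
            hidden.value = area.innerHTML; // the server can then read hfDisplay.Value in Button6_Click
        }
        return true; // let the postback continue
    }

    Hooking this up via OnClientClick="return copyDisplayAreaToHiddenField();" on Button6 should make the markup available server-side; note that posting raw HTML will trip ASP.NET request validation unless the page's ValidateRequest setting allows it or the value is encoded before posting.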

    Read the article

  • Unknown error sourcing a script containing 'typeset -r' wrapped in command substitution

    - by Bernard Assaf
    I wish to source a script, print the value of a variable this script defines, and then have that value assigned to a variable on the command line, with command substitution wrapping the source/print commands. This works on ksh88 but not on ksh93, and I am wondering why.

    $ cat typeset_err.ksh
    #!/bin/ksh
    unset _typeset_var
    typeset -i -r _typeset_var=1
    DIR=init   # this is the variable I want to print

    When run on ksh88 (in this case, an AIX 6.1 box), the output is as follows:

    $ A=$(. ./typeset_err.ksh; print $DIR)
    $ echo $A
    init

    When run on ksh93 (in this case, a Linux machine), the output is as follows:

    $ A=$(. ./typeset_err.ksh; print $DIR)
    -ksh: _typeset_var: is read only
    $ print $A
    ($A is undefined)

    The above is just an example script. What I actually want to accomplish is to source a script that sets values for many variables, so that I can print just one of them, e.g. $DIR, and have $A equal that value. I do not know the value of $DIR in advance, but I need to copy files to $DIR during execution of a different batch script. The idea was therefore to source the script to define its variables, print the one I want, and have that print's output assigned to another variable via the $(...) syntax. Admittedly a bit of a hack, but I don't want to source the entire sub-script in the batch script's environment because I only need one of its variables. The typeset -r code at the beginning is what triggers the error. The script I'm sourcing contains it as a semaphore of sorts, to prevent the script from being sourced more than once in the same environment. (The real script has an if statement that checks for _typeset_var = 1 and exits if it is already set.) I know I can take this out and $DIR prints fine, but the constraints of the problem include keeping the typeset -i -r. In the example script I put an unset first, to ensure _typeset_var isn't already defined. By the way, I do know that it is not possible to unset a typeset -r variable, according to ksh93's man page. There are ways to code around this error; the favourite right now is to not use typeset and just set the semaphore with a plain assignment (e.g. _typeset_var=1). But the error with the code as-is remains a curiosity to me, and I want to see if anyone can explain why it happens. Another idea I abandoned was to grep the variable I need out of its containing script and print just that one value for $A to be set to; however, the variable ($DIR in the example above) might be set from another variable's value (e.g. DIR=$dom/init) that is defined earlier in the script, so I need to source the entire script to make sure all variables are defined and $DIR ends up with the correct value.

    Read the article

  • Problem with jQuery plugin TinySort

    - by Volmar
    I'm trying to sort a list with the help of jQuery and the TinySort plugin. It works well, but one thing is not behaving the way I want. My code is:

    <!doctype html>
    <html>
    <head>
        <meta charset="UTF-8" />
        <title>TinySort problem</title>
        <script type="text/javascript" src="http://so.volmar.se/www/js/jquery.js"></script>
        <script type="text/javascript" src="http://so.volmar.se/www/js/jquery.tinysort.min.js"></script>
        <script type="text/javascript">
        function pktsort(way){
            if($("div#paket>ul>li.sortdiv>a#s_abc").text() == "A-S"){
                $("div#paket>ul>li.sortdiv>a#s_abc").text("S-A");
                $("div#paket ul li.sortable").tsort("",{place:"org",returns:true,order:"desc"});
            }else{
                $("div#paket>ul>li.sortdiv>a#s_abc").text("A-S");
                $("div#paket ul li.sortable").tsort("",{place:"org",returns:true,order:"asc"});
            }
        }
        </script>
    </head>
    <body>
        <div id="paket" title="Paket">
            <ul class="rounded">
                <li class="sortdiv">Sort: <a href="#" onclick="pktsort();" class="active_sort" id="s_abc">A-S</a></li>
                <li class="sortable">Almost Famous</li>
                <li class="sortable">Children of Men</li>
                <li class="sortable">Coeurs</li>
                <li class="sortable">Colossal Youth</li>
                <li class="sortable">Demonlover</li>
                <li class="sortable">Femme Fatale</li>
                <li class="sortable">I'm Not There</li>
                <li class="sortable">In the City of Sylvia</li>
                <li class="sortable">Into the Wild</li>
                <li class="sortable">Je rentre à la maison</li>
                <li class="sortable">King Kong</li>
                <li class="sortable">Little Miss Sunshine</li>
                <li class="sortable">Man on Wire</li>
                <li class="sortable">Milk</li>
                <li class="sortable">Monsters Inc.</li>
                <li class="sortable">My Winnipeg</li>
                <li class="sortable">Ne touchez pas la hache</li>
                <li class="sortable">Nói albinói</li>
                <li class="sortable">Regular Lovers</li>
                <li class="sortable">Shaun of the Dead</li>
                <li class="sortable">Silent Light</li>
                <li class="addmore"><b>This text is not supposed to move</b></li>
            </ul>
        </div>
    </body>
    </html>

    You can try it out at: http://www.volmar.se/list-prob.html

    MY PROBLEM IS: I don't want the <li class="addmore"> to move above the <li class="sortable"> elements when I press the sort link; I want it to always stay at the bottom. You can find the documentation of the TinySort plugin here. I've tried loads of combinations of the place and returns properties, but I just can't get it right.
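    One workaround sketch that sidesteps the place option entirely (this rewrite of pktsort is an assumption on my part, not tested against the page linked above): sort only the li.sortable elements as before, and then explicitly push the addmore row back to the end of the list after every sort, so it can never end up above the sortable rows.

    function pktsort() {
        var link = $("div#paket>ul>li.sortdiv>a#s_abc");
        var items = $("div#paket ul li.sortable");
        if (link.text() == "A-S") {
            link.text("S-A");
            items.tsort("", {place: "org", returns: true, order: "desc"});
        } else {
            link.text("A-S");
            items.tsort("", {place: "org", returns: true, order: "asc"});
        }
        // Whatever order tsort produced, move the "add more" row back to the bottom.
        $("div#paket ul li.addmore").appendTo("div#paket ul");
    }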

    Read the article

< Previous Page | 75 76 77 78 79 80 81 82  | Next Page >