Search Results

Search found 17782 results on 712 pages for 'questions and answers'.


  • How can I send a Wake on Wireless LAN (WoWLAN) / Wake on Demand request manually?

    - by pioto
    This is similar to, but not the same as, http://serverfault.com/questions/1721/is-wireless-wake-on-lan-possible. I know it is supposed to be possible. The question is, how do I do whatever the AirPort Base Station does? All I can find so far is that supposedly I need to send something with Wireless Multimedia Extensions (WMM). Basically, I want to be able to wake up my Mac Mini remotely, probably using my Linux laptop. Does anyone know of a tool to do this? Basic Wake on LAN tools do not seem to be the right thing. I don't need the Sleep Proxy Service bit, because I already know the MAC address of the system I want to wake up.

    Read the article

  • Concurrency pattern of logger in multithreaded application

    - by Dipan Mehta
    The context: We are working on a multi-threaded (Linux-C) application that follows a pipeline model. Each module has a private thread and encapsulated objects which do the processing of data, and each stage has a standard form of exchanging data with the next unit. The application is free from memory leaks and is thread-safe, using locks at the points where the stages exchange data. The total number of threads is about 15, and each thread can have from 1 to 4 objects, making about 25-30 odd objects which all have some critical logging to do. Most discussion I have seen is about different log levels, as in Log4J and its translations to other languages. The real big question is how the overall logging should actually happen. One approach is that all local logging does fprintf to stderr, and stderr is redirected to some file. This approach is very bad when logs become too big. If all objects instantiate their individual loggers (about 30-40 of them), there will be too many files, and unlike the above, one won't have any idea of the true order of events. Timestamping is one possibility, but it is still a mess to collate. If there is a single global logger (singleton pattern), it indirectly blocks many threads while one is busy writing logs, which is unacceptable when the processing in the threads is heavy. So what should be the ideal way to structure the logging objects? What are some of the best practices in actual large-scale applications? I would also love to learn from some of the real designs of large-scale applications to get inspiration from!
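
    One way to avoid both the too-many-files problem and the blocking singleton (a sketch, not from the post): keep a single log file, but have every pipeline object hand a pre-formatted line to a queue guarded by a short lock, and let one dedicated writer thread do all the file I/O. Global ordering is preserved by the queue, and producers only block for the enqueue. The class and function names below (AsyncLogger, log_line) are illustrative C++11, not an existing library:

      // Minimal C++11 sketch: producers pay only for a short queue push;
      // one writer thread owns the file and preserves the global order.
      // Error handling (e.g. a failed fopen) is omitted for brevity.
      #include <condition_variable>
      #include <cstdio>
      #include <deque>
      #include <mutex>
      #include <string>
      #include <thread>

      class AsyncLogger {
      public:
          explicit AsyncLogger(const std::string& path)
              : file_(std::fopen(path.c_str(), "a")),
                writer_(&AsyncLogger::drain, this) {}

          ~AsyncLogger() {
              { std::lock_guard<std::mutex> lock(mtx_); done_ = true; }
              cv_.notify_one();
              writer_.join();
              if (file_) std::fclose(file_);
          }

          // Called from any pipeline thread: format the line up front,
          // then this is only a short push under the mutex.
          void log_line(std::string line) {
              { std::lock_guard<std::mutex> lock(mtx_); queue_.push_back(std::move(line)); }
              cv_.notify_one();
          }

      private:
          void drain() {
              std::deque<std::string> batch;
              for (;;) {
                  {
                      std::unique_lock<std::mutex> lock(mtx_);
                      cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
                      if (queue_.empty() && done_) return;
                      batch.swap(queue_);   // grab everything, then release the lock
                  }
                  for (const std::string& line : batch)
                      std::fprintf(file_, "%s\n", line.c_str());
                  std::fflush(file_);
                  batch.clear();
              }
          }

          std::FILE* file_;
          std::mutex mtx_;
          std::condition_variable cv_;
          bool done_ = false;
          std::deque<std::string> queue_;
          std::thread writer_;   // declared last so it starts after the rest is ready
      };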

    Read the article

  • Parse text file on click - and then display

    - by John R
    I am thinking of a methodology for rapid retrieval of code snippets. I imagine an HTML table with a setup like this: one two ... ten one oneTwo() oneTen() two twoOne() twoTen() ... ten tenOne() tenTwo() When a user clicks a function in this HTML table, a snippet of code is shown in another div tag or perhaps a popup window (I'm open to different solutions). I want to maintain only one PHP file named utilities.php that contains a class called 'util'. This file & class will hold all the functions referenced in the above table (it is also used on various projects and is functional code). A key idea is that I do not want to update the HTML documentation every time I write or update a function in utilities.php. I should be able to click a function in the table and have PHP open the utilities file, parse out the appropriate function and display it in an HTML window. Questions: 1) I will be coding this in PHP and JavaScript but am wondering if similar scripts are available (for all or part) so I don't reinvent the wheel. 2) Quick & easy Ajax suggestions are appreciated too (I will probably use jQuery, but am rusty). 3) Methodology for parsing out the functions from the utilities.php file (I'm not too good with regex).

    Read the article

  • Challenge Ends on Friday!

    - by Yolande Poirier
    This is your last chance to win a JavaOne trip. Submit a project video and code for the IoT Developer Challenge by this Friday, May 30. 12 JavaOne trips will be awarded to 3 professional teams and one student team. Members of two student teams will win laptops and certification training vouchers. Ask your last-minute questions on the coaching form or the Challenge forum; they will be answered promptly. Your project video should explain how your project works. Any common video format, such as mp4, avi or mov, is fine. Your project must use Java Embedded - whether it is Java SE Embedded or Java ME Embedded - with the hardware of your choice, including any devices, boards and IoT technology. The project will be judged on the project implementation, innovation and business usefulness. More details are on the IoT Developer Challenge website. Just for fun! Here is a video of Vinicius Senger giving a tour of his home lab and showing his boards and gadgets.

    Read the article

  • Switching mdadm to an external bitmap

    - by Oli
    I've just read this in another post about improving RAID5/6 write speeds: After increasing stripe cache & switching to external bitmap, my speeds are 160 Mb/s writes, 260 Mb/s reads. :-D I've already found out how to increase the stripe cache, and this worked pretty well, but I'd like to know more about an external bitmap. I have an incredibly fast (540MB/s) RAID0 SSD that would do well if a bitmap does what I think it does, but I'm still very unsure. I've only known about them for as long as I've known about that post. A few questions: What is a bitmap (in terms of mdadm)? What are the advantages of an internal bitmap (over external)? What are the advantages of an external bitmap (over internal)? How do I switch between the two? I should add that while this is an I'm-bored-let's-break-something thread, I do value the data stored on the RAID array. If doing this is going to put data at significant risk, please let me know.

    Read the article

  • Algorithm to shoot at a target in a 3d game

    - by Sebastian Bugiu
    For those of you who remember Descent: FreeSpace, it had a nice feature to help you aim at the enemy when shooting non-homing missiles or lasers: it showed a crosshair in front of the ship you chased, telling you where to shoot in order to hit the moving target. I tried using the answer from http://stackoverflow.com/questions/4107403/ai-algorithm-to-shoot-at-a-target-in-a-2d-game?lq=1 but it's for 2D, so I tried adapting it. I first decomposed the calculation to solve the intersection point for the XoZ plane and saved the x and z coordinates, then solved the intersection point for the XoY plane and added the y coordinate to a final xyz that I then transformed to clip space and put a texture at those coordinates. But of course it doesn't work as it should, or else I wouldn't have posted the question. From what I notice, after finding x in the XoZ plane and then in the XoY plane, the x is not the same, so something must be wrong. float a = ENG_Math.sqr(targetVelocity.x) + ENG_Math.sqr(targetVelocity.y) - ENG_Math.sqr(projectileSpeed); float b = 2.0f * (targetVelocity.x * targetPos.x + targetVelocity.y * targetPos.y); float c = ENG_Math.sqr(targetPos.x) + ENG_Math.sqr(targetPos.y); ENG_Math.solveQuadraticEquation(a, b, c, collisionTime); The first time, targetVelocity.y is actually targetVelocity.z (the same for targetPos), and the second time it's actually targetVelocity.y. The final position after XoZ is crossPosition.set(minTime * finalEntityVelocity.x + finalTargetPos4D.x, 0.0f, minTime * finalEntityVelocity.z + finalTargetPos4D.z); and after XoY crossPosition.y = minTime * finalEntityVelocity.y + finalTargetPos4D.y; Is my approach of separating the calculation into 2 planes any good, or is there a whole different approach for 3D? sqr() is square, not sqrt - just to avoid confusion.
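
    For reference, the usual way to lift that 2D solution to 3D is not to solve two separate planes but to keep all three components in a single quadratic. A sketch of the standard intercept math, using the same naming as the snippet above (P = targetPos relative to the shooter, V = targetVelocity, s = projectileSpeed):

      % require that projectile and target meet at time t:  |P + V t| = s t
      % squaring both sides and collecting terms in t gives one quadratic:
      (V \cdot V - s^2)\,t^2 + 2\,(P \cdot V)\,t + (P \cdot P) = 0
      % i.e.  a = v_x^2 + v_y^2 + v_z^2 - s^2
      %       b = 2\,(p_x v_x + p_y v_y + p_z v_z)
      %       c = p_x^2 + p_y^2 + p_z^2
      % take the smallest positive root t and aim at the point  P + V t

    Solving XoZ and XoY separately produces two different intercept times, which is exactly why the two x values disagree.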

    Read the article

  • Macbook Pro (SantaRosa) internal display not detected by graphics card, external monitor OK

    - by BLAU
    My MacBook Pro (2.4 GHz, Santa Rosa with the infamous nVidia card) acts strangely. It shows the normal gray screen with the Apple logo and animation flawlessly during start-up, but the internal display goes black without any rendering at all once everything is loaded (shining a light on the display shows nothing). If an external monitor is connected through the DVI port, it will remain black during start-up and then show the desktop as the internal display goes black. This happens both while booting into Mountain Lion and Windows XP. I have checked "About my Mac" and only the external display is listed. The same is the case if I use the nVidia control panel in Windows XP. My questions: Is this a hardware problem, or is it related to software or maybe even firmware? What controls the display during start-up, the graphics card or something else?

    Read the article

  • Partner Webcast – Oracle Weblogic 12c for New Projects - 07 Nov 2013

    - by Thanos Terentes Printzios
    Fast-growing organizations need to stay agile in the face of changing customer, business or market requirements. Oracle WebLogic Server 12c is the industry's best application server platform that allows you to quickly develop and deploy reliable, secure, scalable and manageable enterprise Java EE applications. WebLogic Server Java EE applications are based on standardized, modular components. WebLogic Server provides a complete set of services for those modules and handles many details of application behavior automatically, without requiring programming. New project applications are created by Java programmers, Web designers, and application assemblers. Programmers and designers create modules that implement the business and presentation logic for the application. Application assemblers assemble the modules into applications that are ready to deploy on WebLogic Server. Build and run high-performance enterprise applications and services with Oracle WebLogic Server 12c, available in three editions to meet the needs of traditional and cloud IT environments. Join us in this webcast, as we will show you how WebLogic Server 12c helps you build and deploy enterprise Java EE applications, with support for new features for lowering the cost of operations, improving performance and enhancing scalability. Agenda: Oracle WebLogic Server Introduction; Application Development on WebLogic Using Java EE; Overview of the Application Deployment Process; Monitoring Application Performance; Q&A. November 7th, 2013 - 9am UTC / 11am EET. Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. REGISTER NOW. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com. Stay Connected: Oracle Newsletters

    Read the article

  • Authenticate native mobile app using a REST API

    - by Supercell
    I'm starting a new project soon, which is targeting mobile applications for all major mobile platforms (iOS, Android, Windows). It will be a client-server architecture. The app is both informational and transactional. For the transactional part, users are required to have an account and log in before a transaction can be made. I'm new to mobile development, so I don't know how the authentication part is done on these platforms. The clients will communicate with the server through a REST API, using HTTPS of course. I haven't yet decided if I want the user to log in when they open the app, or only when they perform a transaction. I have the following questions: 1) Like the Facebook application, you only enter your credentials when you open the application for the first time. After that, you're automatically signed in every time you open the app. How does one accomplish this? Simply by encrypting and storing the credentials on the device and sending them every time the app starts? 2) Do I need to authenticate the user for each (transactional) request made to the REST API, or use a token-based approach? Please feel free to suggest other ways of authentication. Thanks!

    Read the article

  • Graduating soon with a computer science degree, but have unique circumstances [closed]

    - by Donnie
    I joined the Navy in 1998 and was admitted into Nuclear Power Training. I got my electrician's mate certificate, but was put on medical hold while I was in Nuclear Power Training. I was sent to the Naval Hospital and received a medical (honorable) discharge in the middle of 2000. I decided to stay at home and raise my son, and my girlfriend worked. A few years ago, I decided that I want to work as a programmer, so I went to college and will soon be graduating with a degree in computer science. I hope to finish with a relatively high GPA, 3.8 or 3.9. My question is this: How much, if any, of my Navy experience should I put on my resume? And how do I explain my nine-year gap as a stay-at-home dad? Do I even try to explain it? I know recent college graduates typically have no experience, but obviously I'm not the typical college graduate. Will my long absence from working, or my relatively short duration in the Navy, hurt my chances? Should I just put the college on my resume and hope that HR thinks I'm younger than I am? Obviously, then, my age would show at the interview and there would be questions. Any help is appreciated.

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24 bpp / 32 bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX / OpenGL rendering, so I don't need to use graphics-card-compatible bitmap formats. My questions: What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)? How should bitmaps be stored for high-performance algorithms, so that read/write times are the fastest (fixed array? with/without padding? 24-bpp or 32-bpp?)? How should bitmaps be stored for applications handling a lot of bitmap data, to minimize memory usage (JPEG? or a faster [de]compression algorithm?)? Some possible methods: Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one continuous memory chunk (could be 1-10 MB). Use a form of "sparse" data storage so each line of the bitmap is allocated separately, reusing more memory and requiring smaller contiguous memory segments. Store bitmaps in their compressed form (PNG, JPG, GIF, etc.) and unpack them only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.
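
    For the first option above, a common layout is one contiguous 32-bpp buffer addressed through a row stride (rows padded for alignment), which keeps pixel access a single pointer computation. A minimal C++ sketch with illustrative names only (Bitmap32 is not an existing API); whether 24-bpp saves enough memory to beat aligned 32-bpp access is something to measure per workload:

      #include <cstddef>
      #include <cstdint>
      #include <vector>

      // One contiguous 0xAARRGGBB buffer; stride may exceed width so each
      // row starts on a 16-byte boundary, which helps vectorized loops.
      struct Bitmap32 {
          int width, height, stride;           // stride is in pixels, not bytes
          std::vector<std::uint32_t> pixels;

          Bitmap32(int w, int h)
              : width(w), height(h),
                stride((w + 3) & ~3),           // round width up to a multiple of 4 pixels
                pixels(static_cast<std::size_t>(stride) * h) {}

          std::uint32_t*       row(int y)       { return pixels.data() + static_cast<std::size_t>(y) * stride; }
          const std::uint32_t* row(int y) const { return pixels.data() + static_cast<std::size_t>(y) * stride; }

          std::uint32_t& at(int x, int y)       { return row(y)[x]; }   // no bounds checks: hot path
      };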

    Read the article

  • Radiation from a UPS

    - by Erel Segal
    In our office, there are frequent electric outages that harm my desktop computer, so I wanted to install a UPS. However, my office-mates pointed me to papers talking about hazardous radiation from the UPS. The UPS manufacturers themselves recommend putting the UPS several meters away from humans, which is not possible because our office is small (the power is about 0.5 meters from us). As an alternative to a UPS, my office-mates recommended that I switch to a laptop, which has a battery, so it's immune to outages. I have several questions: Is it true that the radiation from a laptop battery is lower than the radiation from a UPS? They do just the same thing - supply power using a battery! If the answer to 1 is yes - is there an alternative way to attach a battery, similar to a laptop battery, to a desktop computer? If the answer to 1 is no - how can I prove this to my office-mates, so that they let me use a UPS?

    Read the article

  • icacls, Network Service, and setting ACLs on Windows Server 2008

    - by Ted
    Setting ACLs on Windows Server 2008 via the command line is giving me some problems. As per http://web2.minasi.com/forum/topic.asp?TOPIC%5FID=26907 I've tried all sorts of variations: C:\Windows\system32>icacls "D:\Websites\site.com\Web\bin\*" /grant "NT Authority\NETWORK SERVICE":(OI)(CI)M and C:\Windows\system32>icacls "D:\Websites\site.com\Web\bin\*" /grant "NETWORK SERVICE":(OI)(CI)M and all variations in between. However, each try leads to an error such as "Invalid parameter 'NETWORK'", depending on the variation above. As per http://technet.microsoft.com/en-us/library/cc753525%28WS.10%29.aspx (see the comments), it appears that others have experienced the same issue, where the same command works on Windows 7/Vista/etc. but not on Windows Server 2008. What's the best way to apply permissions for the Network Service account on a directory and/or files via the command line in Windows Server 2008? Especially as there's no way to set permissions on multiple files at once via the GUI (see http://serverfault.com/questions/30991/windows-server-2008-change-security-settings-for-multiple-files-at-once).

    Read the article

  • Error 1069 the service did not start due to a logon failure

    - by Si
    Our CruiseControl.NET service on Win2003 Server (VMware virtual) was recently changed from a service account to a user account to allow a new part of our build process to work. The new user has "Log on as a service" rights, verified by checking Local Security Settings - Local Policies - User Rights Assignment, and the user password is set to never expire. The problem I'm facing is that every time the service is restarted, I get the 1069 error described in this question's subject. I have to go into the properties of the service (Log On tab) and re-enter the password, even though it hasn't changed and the user already has the appropriate rights. Once I enter the password and apply the changes, a prompt appears telling me that the user has been granted log on as a service rights. The service will then start with no problems. Not a show stopper, but a pain nonetheless. Why isn't the password persisting with the service?

    Read the article

  • What's the proper term for a function inverse to a constructor - to unwrap a value from a data type?

    - by Petr Pudlák
    Edit: I'm rephrasing the question a bit. Apparently I caused some confusion because I didn't realize that the term destructor is used in OOP for something quite different - it's a function invoked when an object is being destroyed. In functional programming we (try to) avoid mutable state, so there is no equivalent to it. (I added the proper tag to the question.) Instead, I've seen that the record field for unwrapping a value (especially for single-valued data types such as newtypes) is sometimes called a destructor or perhaps a deconstructor. For example, let's have (in Haskell): newtype Wrap = Wrap { unwrap :: Int } Here Wrap is the constructor, and unwrap is what? The questions are: What do we call unwrap in functional programming? Deconstructor? Destructor? Or some other term? And to clarify, is this or other terminology applicable to other functional languages, or is it used just in Haskell? Perhaps also, is there any terminology for this in general, in non-functional languages? I've seen both terms, for example: ... Most often, one supplies smart constructors and destructors for these to ease working with them. ... at the Haskell wiki, or ... The general theme here is to fuse constructor - deconstructor pairs like ... at the Haskell wikibook (here it's probably meant in a bit more general sense), or newtype DList a = DL { unDL :: [a] -> [a] } The unDL function is our deconstructor, which removes the DL constructor. ... in Real World Haskell.

    Read the article

  • OpenGL + Allegro. Moving from software drawing X Y to OpenGL is confusing

    - by Aaron
    I'm having a fair bit of trouble. I'm used to Allegro and drawing sprites on a bitmap buffer at X Y coords. Now I've started a test project with OpenGL and it's weird. Basically, as far as I know, there are many ways to draw stuff in OpenGL. At the moment, I think I'm creating a quad (whatever that is), and I think I've given it a texture from a bitmap, and then I'm drawing that: GLuint gl_image; bitmap = load_bitmap("cat.bmp", NULL); gl_image = allegro_gl_make_texture_ex(AGL_TEXTURE_MASKED, bitmap, GL_RGBA); glBindTexture(GL_TEXTURE_2D, gl_image); glBegin(GL_QUADS); glColor4ub(255, 255, 255, 255); glTexCoord2f(0, 0); glVertex3f(-0.5, 0.5, 0); glTexCoord2f(1, 0); glVertex3f(0.5, 0.5, 0); glTexCoord2f(1, 1); glVertex3f(0.5, -0.5, 0); glTexCoord2f(0, 1); glVertex3f(-0.5, -0.5, 0); glEnd(); So yeah. I have a few questions: Is this the best way of drawing a sprite? Is it suitable? The big question: can anyone help, or does anyone know any tutorials on this weird coordinate thing (if it even is that)? It's vastly different from X Y, but I want to learn it. I was thinking maybe I could learn how this weird positioning stuff works, and then write a function to try and translate it to X and Y coords. That's about it. I'm still trying to figure it all out on my own, but any contributions you guys can make would be greatly appreciated =D Thanks!
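
    If the goal is just to keep thinking in Allegro-style pixel coordinates, one common trick (a sketch, not the only way) is to set an orthographic projection that maps OpenGL units 1:1 to screen pixels with the origin at the top-left; after that, quads can be placed with plain X/Y and width/height. SCREEN_W and SCREEN_H are Allegro's globals, and x, y, w, h below stand for whatever sprite position and size you choose:

      /* once per frame (or whenever the viewport changes) */
      glMatrixMode(GL_PROJECTION);
      glLoadIdentity();
      glOrtho(0, SCREEN_W, SCREEN_H, 0, -1, 1);   /* left, right, bottom, top, near, far */
      glMatrixMode(GL_MODELVIEW);
      glLoadIdentity();

      /* per sprite: pixel coordinates, just like a software blit */
      glBindTexture(GL_TEXTURE_2D, gl_image);
      glBegin(GL_QUADS);
          glTexCoord2f(0, 0); glVertex2f(x,     y);
          glTexCoord2f(1, 0); glVertex2f(x + w, y);
          glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
          glTexCoord2f(0, 1); glVertex2f(x,     y + h);
      glEnd();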

    Read the article

  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background: The "all-PK-must-be-surrogates" approach is not present in Codd's relational model or any SQL standard (ANSI, ISO or other). Canonical books seem to avoid this restriction too. Oracle's own data dictionary schema uses natural keys in some tables and surrogate keys in other tables. I mention this because these people must know a thing or two about RDBMS design. PPDM (Professional Petroleum Data Management Association) recommends the same as the canonical books do: use surrogate keys as primary keys when there are no natural or business keys; when natural or business keys are bad (change often); when the value of the natural or business key is not known at the time of inserting the record; or when multicolumn natural keys (usually several FKs) exceed three columns, which makes joins too verbose. Also, I have not found a canonical source that says natural keys need to be immutable. All I find is that they need to be very stable, i.e. they need to change only on very rare occasions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too. The origins of the "all-surrogates" approach seem to come from recommendations from some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of maintainability and readability of the SQL code. Much provision is made for something that may or may not happen in the future (the natural PK changes, so we will have to use the RDBMS cascade update functionality) at the expense of day-to-day tasks like having to join more tables in every query and having to write code for importing data between databases, an otherwise very straightforward procedure (due to the need to avoid PK collisions and having to create stage/equivalence tables beforehand). Another argument is that indexes based on integers are faster, but that has to be supported with benchmarks. Obviously, long, varying varchars are not good for PKs, but indexes based on short, fixed-length varchars are almost as fast as integers. The questions: Is there any canonical source that supports the "all-PK-must-be-surrogates" approach? Has Codd's relational model been superseded by a newer relational model?

    Read the article

  • Low-res emacs24 icon in application switcher 12.10

    - by MTS
    I recently upgraded to Quantal, and also switched up to emacs24 from 23. Everything is great, except for one thing: the icon in the application switcher for emacs24 is a horrible, low-resolution eyesore. Compare the two side by side: I've seen a couple of questions addressing issues like this, but they're not quite the same. This one says that it is happening with all icons, but that's clearly not the case here. And this one seems more relevant, but it is talking about GNOME, not Unity. In the comments to the one answer for the second question, it says to look at the icons in /usr/share/icons to see if they are low-resolution, and if so to replace them with better ones. There are a ton of emacs icons, in fact. They are in various subfolders of /usr/share/icons/hicolor and they come in sizes ranging from 16x16 to 128x128; there are also scalable .svg versions of the icons. I noticed that there are no 192x192 or 256x256 versions. But it seems like that shouldn't matter, since emacs23 also didn't have icons in those sizes. Any help would be much appreciated!

    Read the article

  • Non-blocking ORM issues

    - by Nikolay Fominyh
    Once I had a question on SO, and found that there are no non-blocking ORMs for my favorite framework. I mean an ORM with callback support for asynchronous retrieval. The ORM would be supplied with a callback or some such to "activate" when data has been received. Otherwise the ORM needs to be split off into a separate thread to guarantee UI responsiveness. I want to create one, but I have some questions that are blocking me from starting development: What issues can we meet when developing an ORM? Does putting the word "non-blocking" in front of "ORM" dramatically increase the complexity of the ORM? Why are there not many non-blocking ORMs around? Update: It looks like I have to improve my question. We have solutions that already allow us to receive data in a non-blocking way, and I believe that not all companies that use such solutions are using raw SQL. We want to create a more generic solution that we can reuse in future projects. What difficulties can we meet?

    Read the article

  • Problem with installing Nvidia display drivers on Ubuntu 13.10

    - by Pascal
    Hello everyone and thank you for taking a look at this topic! I'm currently trying out Ubuntu 13.10 but I keep hitting a wall when it comes to installing a driver. I've tried: sudo apt-get install nvidia-current This resulted in an unbootable system. The screen just stayed black and the cursor displayed as an 'X'. After that I had to re-install Ubuntu. The computer I'm using is an Acer Aspire V3 with a built-in Nvidia GeForce GT 630M and also an Intel HD Graphics chipset (not sure if chipset is the right word here). "lspci | grep VGA" output: pascal@pascal-Aspire-V3-571G:~$ lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation GF108M [GeForce GT 630M] (rev a1) I've searched a bit here and there and found out that it would be wise to mention that this laptop is using (or so I think) Nvidia Optimus; not sure if it will add anything to the subject, but at least I'll mention it just to be sure. Now to the questions: Q1: How is this caused and how can I fix it? Q2: What additional information could I provide to help you help me?

    Read the article

  • C# scripting execution with XNA (actions take more than 1 frame)

    - by user658091
    I'm trying to figure out how to implement C# scripting in my game (XNA with C#). I will be using C# as the scripting language. My question is: how do I call functions that take more than one frame to finish? For example: class UserScript : Script { public override void execute(Game game) { //script must wait for dialog to be closed game.openDialog("This is a dialog"); //script shouldn't wait for this int goldToGive = 100; goldToGive += 100; game.addGold(goldToGive); // //script should wait for cinematic to end game.startCinematic("name_of_cinematic"); //doesn't wait game.addGold(100); } } I found that you can do this with yield, but I'm not sure if it's the correct way (the answer is from 2010, and the article it mentions no longer exists): http://stackoverflow.com/questions/3540231/implementing-a-simple-xml-based-scripting-language-for-an-xna-game Is yield the answer? If so, can anyone point me to any examples/tutorials/books? I haven't found any covering my situation. If not, what approach should I take? Or am I better off with multi-threading?

    Read the article

  • Apache freezing, How to detect which virtual host is getting hit?

    - by mr-euro
    I have a production server that in the last 24 hours has been hard-rebooted 4 times due to freezes. Ping is fine but all other services time out (Apache, SSHd, etc.). I have now diagnosed it as Apache running out of memory due to an exorbitant number of child processes forking suddenly within seconds of starting Apache. Stopping Apache just after rebooting keeps the server stable again. My two questions are: Is there a way to detect which of the vhosts is suddenly being hammered, without looking into each vhost's access log one by one? Is there a way to quickly enable/disable vhosts without commenting (#) them all out in httpd.conf?

    Read the article

  • Unable to send mail from Postfix on Ubuntu 12.04

    - by gilmad
    I'm trying to send an email through Google from my localhost (via PHP 5.3), but Google keeps blocking my requests. I tried to follow the solutions given in a few similar questions, but for some reason they do not work. I followed these instructions to configure it: http://www.dnsexit.com/support/mailrelay/postfix.html Now for the config data. My main.cf file looks like this: relayhost = [smtp.gmail.com]:587 smtp_fallback_relay = [relay.google.com] smtp_sasl_auth_enable = yes smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_sasl_security_options = My sasl_passwd looks like this: [smtp.gmail.com]:587 [email protected]:password And this is what the mail.log rows look like: Dec 14 10:24:50 COMP-NAME postfix/pickup[5185]: 1C3987E0EDD: uid=33 from= Dec 14 10:24:50 COMP-NAME postfix/cleanup[5499]: 1C3987E0EDD: message-id=<[email protected] Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: 1C3987E0EDD: from=, size=483, nrcpt=1 (queue active) Dec 14 10:24:50 COMP-NAME postfix/smtp[5501]: 1C3987E0EDD: to=, relay=smtp.gmail.com[173.194.70.109]:587, delay=0.61, delays=0.19/0/0.32/0.1, dsn=5.7.0, status=bounced (host smtp.gmail.com[173.194.70.109] said: 530 5.7.0 Must issue a STARTTLS command first. w3sm8024250eel.17 (in reply to MAIL FROM command)) Dec 14 10:24:50 COMP-NAME postfix/cleanup[5499]: C20677E0EDE: message-id=<[email protected] Dec 14 10:24:50 COMP-NAME postfix/bounce[5502]: 1C3987E0EDD: sender non-delivery notification: C20677E0EDE Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: C20677E0EDE: from=<, size=2532, nrcpt=1 (queue active) Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: 1C3987E0EDD: removed
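
    A hedged guess based on that bounce ("530 5.7.0 Must issue a STARTTLS command first"): Gmail's port 587 requires the client to negotiate TLS before AUTH, and the main.cf above never enables client-side TLS. The parameter names below are standard Postfix settings, but the exact values are only a sketch to try, not a verified fix for this particular setup:

      smtp_use_tls = yes
      smtp_tls_security_level = encrypt
      smtp_sasl_security_options = noanonymous
      smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt

    followed by a sudo service postfix reload (or restart) so the new settings are picked up.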

    Read the article

  • "Programming error" exceptions - Is my approach sound?

    - by Medo42
    I am currently trying to improve my use of exceptions, and found the important distinction between exceptions that signify programming errors (e.g. someone passed null as an argument, or called a method on an object after it was disposed) and those that signify a failure in the operation that is not the caller's fault (e.g. an I/O exception). As far as I understand, it makes little sense for an immediate caller to actually handle programming-error exceptions; the caller should instead ensure that the preconditions are met. Only "outer" exception handlers at task boundaries should catch them, so they can keep the system running if a task fails. In order to ensure that client code can cleanly catch "failure" exceptions without catching error exceptions by mistake, I now create my own exception classes for all failure exceptions and document them in the methods that throw them. I would make them checked exceptions in Java. Now I have a few questions: Before, I tried to document all exceptions that a method could throw, but that sometimes creates an unwieldy list that needs to be documented in every method up the call chain until you can show that the error won't happen. Instead, I document the preconditions in the summary / parameter descriptions and don't even mention what happens if they are not met. The idea is that people should not try to catch these exceptions explicitly anyway, so there is no need to document their types. Would you agree that this is enough? Going further, do you think all preconditions even need to be documented for every method? For example, calling methods on IDisposable objects after calling Dispose is an error, but since IDisposable is such a widely used interface, can I just assume a programmer will know this? A similar case is with reference-type parameters where passing null makes no conceivable sense: should I document "non-null" anyway? IMO, documentation should only cover things that are not obvious, but I am not sure where "obvious" ends.
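
    As an illustrative mapping only (the post doesn't name a language), this error/failure split corresponds roughly in C++ to std::logic_error-derived types such as std::invalid_argument for broken preconditions, versus std::runtime_error-derived types for operational failures; only the latter are caught near the call site, while the former are left to a task-boundary handler. FetchFailure and fetch_quote below are made-up names:

      #include <iostream>
      #include <stdexcept>
      #include <string>

      // Operational failure: not the caller's fault, documented, meant to be caught.
      struct FetchFailure : std::runtime_error {
          using std::runtime_error::runtime_error;
      };

      std::string fetch_quote(const std::string& symbol) {
          if (symbol.empty())                      // broken precondition: programming error
              throw std::invalid_argument("fetch_quote: symbol must be non-empty");
          // ... real I/O would go here; pretend the remote side is down ...
          throw FetchFailure("quote service unreachable");
      }

      int main() {
          try {
              fetch_quote("ORCL");
          } catch (const FetchFailure& e) {        // immediate caller handles failures only
              std::cerr << "retry later: " << e.what() << "\n";
          }
          // std::invalid_argument is deliberately not caught here; a task-boundary
          // handler (or the default terminate) reports it as a bug instead.
          return 0;
      }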

    Read the article
