Search Results

Search found 34971 results on 1399 pages for 'st even'.


  • How to get ip-address out of SPAMHAUS blacklist?

    - by ???????? ????? ???????????
    I frequently read that it is possible to remove individual IP addresses from the SPAMHAUS blacklist. OK. Here is 91.205.43.252 (91.205.43.251 - 91.205.43.253), used by back3.stopspamers.com (back2.stopspamers.com, back1.stopspamers.com) in a geo-cluster on dedicated servers in Switzerland. The queries http://www.spamhaus.org/query/bl?ip=91.205.43.251 http://www.spamhaus.org/query/bl?ip=91.205.43.252 http://www.spamhaus.org/query/bl?ip=91.205.43.253 all report that 91.205.43.251 - 91.205.43.253 are listed in SBL80808, and the SBL80808 record says: "Ref: SBL80808 91.205.40.0/22 is listed on the Spamhaus Block List (SBL) 01-Apr-2010 05:52 GMT | SR04 Spamming and now seems this place is involved in other fraud". 91.205.43.251 - 91.205.43.253 are not individually listed among the criminal IP addresses, yet there is no way to remove them individually from the blacklist. How can these individual addresses (91.205.43.251 - 91.205.43.253) be removed from the SPAMHAUS blacklist? And why is SPAMHAUS blacklisting a spam-stopping service in the first place? This is only one example of many. My related posts: Blacklist IP database. Update: From the answer provided I realized that my question was not understood. These IP addresses, 91.205.43.251 - 91.205.43.253, are not blacklisted individually; they are blacklisted through their supernet, 91.205.40.0/22. Also note that the dedicated server, the ISP and the customer are all in different, distant countries. Update 2: http://www.spamhaus.org/sbl/sbl.lasso?query=SBL80808#removal says: "To have record SBL80808 (91.205.40.0/22) removed from the SBL, the Abuse/Security representative of RIPE (or the Internet Service Provider responsible for supplying connectivity to 91.205.40.0/22) needs to contact the SBL Team". There are dozens of "abusers" in that SBL80808 listing. The company using that dedicated server is not an ISP or a RIPE representative, so it cannot handle these issues itself. And even if it could, it only takes someone pressing "Report spam" somewhere on the Internet to be blacklisted again, so this is a fruitless approach. These techniques are broadly used by criminals and spammers; see also my other post on blacklisting. This is just one specific example, but there are many, many more.
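
    For reference, a listing like this can be checked programmatically the same way mail servers check it: reverse the octets of the address and look it up as a DNS A record under the Spamhaus zone. A minimal Python sketch of that standard DNSBL lookup (any answer in 127.0.0.0/8 means the address is listed; no answer means it is not):

      import socket

      def spamhaus_lookup(ip, zone="zen.spamhaus.org"):
          # Standard DNSBL query: 91.205.43.251 -> 251.43.205.91.zen.spamhaus.org
          query = ".".join(reversed(ip.split("."))) + "." + zone
          try:
              return socket.gethostbyname(query)  # e.g. '127.0.0.2' indicates an SBL listing
          except socket.gaierror:
              return None  # NXDOMAIN: not listed

      for ip in ("91.205.43.251", "91.205.43.252", "91.205.43.253"):
          print(ip, spamhaus_lookup(ip))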

    Read the article

  • Android development in Unreal with an existing project

    - by user1238929
    I am currently using an Unreal 3 project that has been targeted for multiple devices. Originally, it was targeted for iOS and now I want to try to build it for Android. The project is capable of doing it and I am in the process of testing it. I think I have everything I need in order to build it and launch it on an Android device that I have set up, connected to my PC, and which is recognized by the Android SDK's ADB. I am currently trying to build and launch the game through the Unreal Frontend, but when I try, I get stuck at getting the Unreal Frontend to find my Android device as a platform to debug, like it would with a PC, Xbox 360, or PS3. Right now, I am just trying to launch the game to see if I can get it to simply run on an Android device; I'm going to worry about the packaging later. So I have two questions: Am I on the right track in looking at the Unreal Frontend to cook and launch the project on Android, or should I look somewhere else? How do I get Unreal to recognize my Android device as a platform to launch on? I would even settle for it recognizing an emulator, but that seems even harder.
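
    As a first sanity check outside Unreal entirely, it is worth confirming that the phone is visible to the Android tooling itself; if it does not show up here with the state "device", no front end will see it either. A small sketch (it assumes adb from the Android SDK platform tools is on the PATH):

      import subprocess

      # 'adb devices' lists attached devices; an authorized phone appears with the
      # state 'device' rather than 'offline' or 'unauthorized'.
      out = subprocess.run(["adb", "devices"], capture_output=True, text=True).stdout
      attached = [line.split()[0] for line in out.splitlines()[1:]
                  if line.strip().endswith("device")]
      print("Devices visible to adb:", attached or "none")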

    Read the article

  • The Cloud is STILL too slow!

    - by harry.foxwell(at)oracle.com
    If you've been in the computing industry long enough to remember dialup modems and other "ancient" technologies, you might be tempted to marvel at today's wonderfully powerful multicore PCs, ginormous disks, and blazingly fast networks.  Wow, you're in Internet Nirvana, right!  Well, no, not by a long shot. Considering the exponentially growing expectations of what the Web, that is, "the Cloud", is supposed to provide, today's Web/Cloud services are still way too slow. Already we are seeing cloud-enabled consumer devices that are stressing even the most advanced public network services.  Like the iPad and its competitors, ever more powerful smart-phones, and an imminent horde of special purpose gadgets such as the proposed "cloud camera" (see http://gdgt.com/discuss/it-time-cloud-camera-found-out-cnr/ ). And at the same time that the number and type of cloud services are growing, user tolerance for even the slightest of download delays is rapidly decreasing.  Ten years ago Web developers followed the "8-Second Rule" (the average time a typical Web user would tolerate for a page to download and render).  Not anymore; now it's less than 3 seconds, and only a bit longer for mobile devices (see http://www.technologyreview.com/files/54902/GoogleSpeed_charts.pdf).  How spoiled we've become! Google, among others, recognizes this problem and is working to encourage the development of a faster Web (see http://www.technologyreview.com/web/32338/). They, along with their competitors and ISPs, will have to encourage and support significantly better Web performance in order to provide the types of services envisioned for the Cloud.  How will they do this? Through the development of faster components, better use of caching technologies, and the really tough one - exploiting parallelism. Not that parallel technologies like multicore processors are hard to build...we already have them.  It's just that we're not that good yet at using them effectively.  And if we don't get better, users will abandon cloud-based services...in less than 3 seconds.
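
    The tolerance figures quoted above are easy to sanity-check against your own services; a rough, illustrative Python sketch that measures network time only (rendering time would come on top of this, so it understates what a real user experiences; the URL is a placeholder):

      import time
      import urllib.request

      url = "https://www.example.com/"  # substitute the page you care about
      start = time.perf_counter()
      with urllib.request.urlopen(url, timeout=10) as resp:
          body = resp.read()
      elapsed = time.perf_counter() - start
      print(f"Downloaded {len(body)} bytes in {elapsed:.2f}s",
            "(over the 3-second tolerance)" if elapsed > 3 else "")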

    Read the article

  • I'm scared for my technical phone interview for an internship!

    - by Marie
    [EDIT 2.0 ]Hello everyone. This is my second phone interview for a development internship. My very first one was okay, but I didn't get my dream internship. Now, I'm facing fears about this upcoming interview. My fears include the following: I'm 19 years old. The thought of 2 lead developers interviewing me makes me think that I'll know so little of what they'd want me to know. Like they will expect so much. I'm a junior having these panic attacks that I did not get in the other internship. I have a little voice saying "You didn't get the other one. What makes you think you'll get this one?". I'm scared that I'll freeze up, forget everything I know, and stutter like an idiot. I'm still traumatized by the last one, because I really really wanted that internship, and I even studied very hard for it. When I was in the interview, I was so nervous I couldn't think clearly. As a result, I didn't do as well as I know I could have. The minute I hung up, I even thought of a better solution to the interview question! Any tips for a soon-to-be intern (hopefully!)? Thank you! P.S. I'm preparing by using this guide for phone interviews.

    Read the article

  • Booting from USB on Mac Air (using setup_mac_usb_boot.sh)

    - by Mike O
    So, I've been working on this for hours and it's getting a little tiring. As some of you may know, installing Ubuntu on Macs is frequently an adventure, and I'm experiencing that right now. The part I'm hung up on at the moment is making a bootable USB. I would just use a CD, but my laptop is a MacBook Air (which doesn't have a CD drive), and I don't own an external CD drive. I initially attempted to use the command line method supplied by the Ubuntu documentation here: https://help.ubuntu.com/community/How%20to%20install%20Ubuntu%20on%20MacBook%20using%20USB%20Stick However, that wasn't even recognized by rEFIt even when I made a number of different modifications to the process, so I quickly decided to look elsewhere. I came across this guide: https://help.ubuntu.com/community/MacBookAir4-2#Basic_Installation_Instructions This ended up working to a large extent. If I choose the supplied grub from rEFIt, it will bring me to the Ubuntu grub, asking me to try it, install, or check the disk. And if I choose to boot Linux directly from rEFIt, it will bring me to the language selection menu. But when I make my selection from either of these menus it pauses for about ten seconds and then gives me a command line error message. It begins with kernel panic - not syncing timer doesn't work through interrupt, and then shows about eight file names. Does anyone here have any ideas as to what can be causing this? I also tried the script with both Ubuntu 11.10 (the current version when the script was written) and 12.04.

    Read the article

  • How to format a write-protected FAT32 filesystem infected with a Windows virus

    - by explorex
    Hi, I have a pendrive with a FAT32 filesystem. It is infected with a virus (I don't know which one) that drops an autorun.inf and creates an exe file inside each folder. I tried to format it with various filesystems and even tried to delete the partition with GParted, but I couldn't, because it says the drive is write protected; I can't even delete files. How can I format it?
    user@explorerx:~$ sudo fdisk -l
    Disk /dev/sda: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xbd04bd04
    Device Boot  Start   End     Blocks      Id  System
    /dev/sda1 *  1       498     3998720     82  Linux swap / Solaris
    Partition 1 does not end on cylinder boundary.
    /dev/sda2    499     19457   152287585+  f   W95 Ext'd (LBA)
    /dev/sda5    5100    10198   40957686    7   HPFS/NTFS
    /dev/sda6    10199   14787   36861111    7   HPFS/NTFS
    /dev/sda7    14788   19457   37511743+   7   HPFS/NTFS
    /dev/sda8    499     5099    36956160    83  Linux
    Partition table entries are not in disk order
    Disk /dev/sdc: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xc13bc13b
    Device Boot  Start   End     Blocks      Id  System
    /dev/sdc1    1       9729    78143488    7   HPFS/NTFS
    /dev/sdc2    9729    19457   78143488    7   HPFS/NTFS
    Disk /dev/sdb: 4194 MB, 4194304000 bytes
    112 heads, 47 sectors/track, 1556 cylinders
    Units = cylinders of 5264 * 512 = 2695168 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    Device Boot  Start   End     Blocks      Id  System
    /dev/sdb1    2       1557    4091904     b   W95 FAT32
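
    One thing worth checking before blaming the virus alone: whether the kernel itself considers the device read-only. A hedged sketch, not a guaranteed fix (run as root; mkfs destroys all data, and many pendrives enforce write protection in the flash controller, which no software can clear):

      import pathlib
      import subprocess

      dev = "sdb"  # the pendrive from the fdisk output above; double-check before running
      ro_flag = pathlib.Path(f"/sys/block/{dev}/ro").read_text().strip()
      print("Kernel read-only flag:", ro_flag)  # '1' means the block device is write protected

      if ro_flag == "0":
          # Recreate the FAT32 filesystem on the first partition (destroys all data).
          subprocess.run(["umount", f"/dev/{dev}1"], check=False)
          subprocess.run(["mkfs.vfat", "-F", "32", f"/dev/{dev}1"], check=True)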

    Read the article

  • How do I refresh Disk Utility?

    - by detly
    I do a lot of live system building, which eventually involves imaging a USB drive with the built binary image: dd if=binary.img of=/dev/sdX followed by sync, where /dev/sdX is a USB drive. As part of my workflow, I like to have Ubuntu's Disk Utility open so I can verify the drive letter and unmount anything that gets mounted automatically. I also use it to create extra partitions for persistence. The trouble is, after writing the image to the device — and even after the sync operation — Disk Utility doesn't show the new partition. It just shows free space. GParted sees it and fdisk sees it. Even after closing and opening Disk Utility, it still shows only free space. If I click "Safe Removal" and physically unplug and replug the USB drive, Disk Utility will then see the partition. Why do I need to remove and re-insert the drive for Disk Utility to see the partitions on it? Can I force Disk Utility to update its information without needing to do this? (using Disk Utility 3.0.2 under Ubuntu 11.10.)
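
    I can't speak for Disk Utility's internals, but the usual way to make the kernel (and the tools that listen to it) notice a freshly written partition table without replugging the drive is to ask it to re-read the table. A hedged sketch (run as root; /dev/sdX is a placeholder for the real device):

      import subprocess

      dev = "/dev/sdX"  # the USB drive just written with dd

      # Either of these asks the kernel to re-scan the partition table.
      subprocess.run(["blockdev", "--rereadpt", dev], check=False)
      subprocess.run(["partprobe", dev], check=False)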

    Read the article

  • DSL PPPoE connection not working?

    - by Mussnoon
    I use a wired PPPoE connection to connect to the Internet. What I need to do on Windows to connect to it is put in static IP address, gateway, subnet mask and DNS servers for my LAN card. Next I have to create a dialer for a PPPoE connection, put in my user name, the service name and the password, and "dial" this connection. And it works fine. On Ubuntu 10.04, however, I have tried setting things up in a similar fashion - put in all static addresses for the "automatic" wired connection, then put in user name, service name, password for a "DSL" connection. It worked for a while, then stopped. I have tried putting in all the details within the DSL configuration dialog, same thing happened - it worked for a while, then stopped. I have tried deleting the ethernet connection and only keeping the DSL one with all the numbers put in place, same thing happened - it worked for a while, then stopped. Each of the times, when it connected, it connected randomly, after trying a few times, and either stopped working within a few minutes, or after I had rebooted. I have deleted and remade the connection dozens of times - even with different names, but nothing seems to be working. I have also tried pppoeconf from the terminal, didn't work. I have checked /var/log/kern.log, but nothing changes in the file when I try to connect. I have also checked /sbin/route, but gedit can't even open it (says it can't figure the character encoding...). The "connection established" notification pops up from the top right corner, the same way as when the computer is actually connected to a network. Can anyone figure what's wrong and how it can be solved?
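
    One detail worth noting: pppd and the PPPoE plugin normally log to the system log rather than to kern.log, so the file checked above can stay quiet even during a failed dial. A small sketch that pulls out the relevant lines (it assumes the stock syslog location on 10.04):

      # Show what pppd actually reported on the recent connection attempts.
      keywords = ("pppd", "pppoe", "CHAP", "PAP")
      with open("/var/log/syslog", errors="replace") as log:
          for line in log:
              if any(k in line for k in keywords):
                  print(line.rstrip())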

    Read the article

  • 10 Windows Tweaking Myths Debunked

    - by Chris Hoffman
    Windows is big, complicated, and misunderstood. You’ll still stumble across bad advice from time to time when browsing the web. These Windows tweaking, performance, and system maintenance tips are mostly just useless, but some are actively harmful. Luckily, most of these myths have been stomped out on mainstream sites and forums. However, if you start searching the web, you’ll still find websites that recommend you do these things. Erase Cache Files Regularly to Speed Things Up You can free up disk space by running an application like CCleaner, another temporary-file-cleaning utility, or even the Windows Disk Cleanup tool. In some cases, you may even see an old computer speed up when you erase a large amount of useless files. However, running CCleaner or similar utilities every day to erase your browser’s cache won’t actually speed things up. It will slow down your web browsing as your web browser is forced to redownload the files all over again, and reconstruct the cache you regularly delete. If you’ve installed CCleaner or a similar program and run it every day with the default settings, you’re actually slowing down your web browsing. Consider at least preventing the program from wiping out your web browser cache. Enable ReadyBoost to Speed Up Modern PCs Windows still prompts you to enable ReadyBoost when you insert a USB stick or memory card. On modern computers, this is completely pointless — ReadyBoost won’t actually speed up your computer if you have at least 1 GB of RAM. If you have a very old computer with a tiny amount of RAM — think 512 MB — ReadyBoost may help a bit. Otherwise, don’t bother. Open the Disk Defragmenter and Manually Defragment On Windows 98, users had to manually open the defragmentation tool and run it, ensuring no other applications were using the hard drive while it did its work. Modern versions of Windows are capable of defragmenting your file system while other programs are using it, and they automatically defragment your disks for you. If you’re still opening the Disk Defragmenter every week and clicking the Defragment button, you don’t need to do this — Windows is doing it for you unless you’ve told it not to run on a schedule. Modern computers with solid-state drives don’t have to be defragmented at all. Disable Your Pagefile to Increase Performance When Windows runs out of empty space in RAM, it swaps out data from memory to a pagefile on your hard disk. If a computer doesn’t have much memory and it’s running slow, it’s probably moving data to the pagefile or reading data from it. Some Windows geeks seem to think that the pagefile is bad for system performance and disable it completely. The argument seems to be that Windows can’t be trusted to manage a pagefile and won’t use it intelligently, so the pagefile needs to be removed. As long as you have enough RAM, it’s true that you can get by without a pagefile. However, if you do have enough RAM, Windows will only use the pagefile rarely anyway. Tests have found that disabling the pagefile offers no performance benefit. Enable CPU Cores in MSConfig Some websites claim that Windows may not be using all of your CPU cores or that you can speed up your boot time by increasing the amount of cores used during boot. They direct you to the MSConfig application, where you can indeed select an option that appears to increase the amount of cores used. In reality, Windows always uses the maximum amount of processor cores your CPU has. 
(Technically, only one core is used at the beginning of the boot process, but the additional cores are quickly activated.) Leave this option unchecked. It’s just a debugging option that allows you to set a maximum number of cores, so it would be useful if you wanted to force Windows to only use a single core on a multi-core system — but all it can do is restrict the amount of cores used. Clean Your Prefetch To Increase Startup Speed Windows watches the programs you run and creates .pf files in its Prefetch folder for them. The Prefetch feature works as a sort of cache — when you open an application, Windows checks the Prefetch folder, looks at the application’s .pf file (if it exists), and uses that as a guide to start preloading data that the application will use. This helps your applications start faster. Some Windows geeks have misunderstood this feature. They believe that Windows loads these files at boot, so your boot time will slow down due to Windows preloading the data specified in the .pf files. They also argue you’ll build up useless files as you uninstall programs and .pf files will be left over. In reality, Windows only loads the data in these .pf files when you launch the associated application and only stores .pf files for the 128 most recently launched programs. If you were to regularly clean out the Prefetch folder, not only would programs take longer to open because they won’t be preloaded, Windows will have to waste time recreating all the .pf files. You could also modify the PrefetchParameters setting to disable Prefetch, but there’s no reason to do that. Let Windows manage Prefetch on its own. Disable QoS To Increase Network Bandwidth Quality of Service (QoS) is a feature that allows your computer to prioritize its traffic. For example, a time-critical application like Skype could choose to use QoS and prioritize its traffic over a file-downloading program so your voice conversation would work smoothly, even while you were downloading files. Some people incorrectly believe that QoS always reserves a certain amount of bandwidth and this bandwidth is unused until you disable it. This is untrue. In reality, 100% of bandwidth is normally available to all applications unless a program chooses to use QoS. Even if a program does choose to use QoS, the reserved space will be available to other programs unless the program is actively using it. No bandwidth is ever set aside and left empty. Set DisablePagingExecutive to Make Windows Faster The DisablePagingExecutive registry setting is set to 0 by default, which allows drivers and system code to be paged to the disk. When set to 1, drivers and system code will be forced to stay resident in memory. Once again, some people believe that Windows isn’t smart enough to manage the pagefile on its own and believe that changing this option will force Windows to keep important files in memory rather than stupidly paging them out. If you have more than enough memory, changing this won’t really do anything. If you have little memory, changing this setting may force Windows to push programs you’re using to the page file rather than push unused system files there — this would slow things down. This is an option that may be helpful for debugging in some situations, not a setting to change for more performance. Process Idle Tasks to Free Memory Windows does things, such as creating scheduled system restore points, when you step away from your computer. 
It waits until your computer is “idle” so it won’t slow your computer and waste your time while you’re using it. Running the “Rundll32.exe advapi32.dll,ProcessIdleTasks” command forces Windows to perform all of these tasks while you’re using the computer. This is completely pointless and won’t help free memory or anything like that — all you’re doing is forcing Windows to slow your computer down while you’re using it. This command only exists so benchmarking programs can force idle tasks to run before performing benchmarks, ensuring idle tasks don’t start running and interfere with the benchmark. Delay or Disable Windows Services There’s no real reason to disable Windows services anymore. There was a time when Windows was particularly heavy and computers had little memory — think Windows Vista and those “Vista Capable” PCs Microsoft was sued over. Modern versions of Windows like Windows 7 and 8 are lighter than Windows Vista and computers have more than enough memory, so you won’t see any improvements from disabling system services included with Windows. Some people argue for not disabling services, however — they recommend setting services from “Automatic” to “Automatic (Delayed Start)”. By default, the Delayed Start option just starts services two minutes after the last “Automatic” service starts. Setting services to Delayed Start won’t really speed up your boot time, as the services will still need to start — in fact, it may lengthen the time it takes to get a usable desktop as services will still be loading two minutes after booting. Most services can load in parallel, and loading the services as early as possible will result in a better experience. The “Delayed Start” feature is primarily useful for system administrators who need to ensure a specific service starts later than another service. If you ever find a guide that recommends you set a little-known registry setting to improve performance, take a closer look — the change is probably useless. Want to actually speed up your PC? Try disabling useless startup programs that run on boot, increasing your boot time and consuming memory in the background. This is a much better tip than doing any of the above, especially considering most Windows PCs come packed to the brim with bloatware.     
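
    If you want to check where your own machine stands on one of these settings without trusting a tweak guide, the registry value discussed above can be read directly. A small read-only sketch using Python's built-in winreg module (it changes nothing):

      import winreg

      key_path = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
      with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
          value, _ = winreg.QueryValueEx(key, "DisablePagingExecutive")
      print("DisablePagingExecutive =", value)  # 0 is the default; leave it there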

    Read the article

  • How can I be sure that professional programming is not for me?

    - by user17766
    Hello everybody. I love programming, developing projects as a hobby and learning new concepts, but I am struggling more and more in my current job despite having learned many things well. I can hardly even understand the tasks assigned to me, and I keep asking myself why I am having such a hard time. Maybe it is not my fault? Our architect doesn't spend enough time explaining the complicated sides of the project to us, or maybe I am just not smart enough to understand it quickly. Our architect also doesn't seem to know what kind of hell he is creating: seeing 3 levels of generic types and 4-5 levels of generic inheritance in the domain model objects really makes me think so. It looks like abusing concepts more than reducing complexity, and I suspect he hasn't experienced such a big project before, given how he gets confused by the project's problems. Maybe I am not in the right company? Maybe I am not a good programmer? Maybe I am really stupid? Being good at programming concepts is not enough to deal with a big project's complications, so should someone tell me that I still have to put in a lot of effort, even as a good programmer, to adapt myself to any big project? I also had bad experiences at my previous job. My professional experience is only a few months, but I spent 2 years learning and coding for fun, and I can really say that I have good skills in OOP, design patterns and coding standards, and deep knowledge of the language I currently use. Sometimes I think about leaving professional programming and working in some lame job while doing programming just as a hobby. Waiting for suggestions and insights.

    Read the article

  • Using Scrum on small projects where Owner doesn't want to be involved

    - by Andrej Mohar
    Recently I've been reading and learning quite a lot about scrum and I like it a lot. However, I do have a couple of likely scenarios in my head to which I don't know the solution. So let's say that I might want to organize an agile team of (for instance) four web developers (one of them a UI/UX designer). This team would operate on scrum principles. Initially we would probably be working on projects like landing pages for ordinary people's small businesses, like renting apartments, selling cookies... Such customers simply can't be given the Product Owner role (IMHO), because they usually expect to hire a company, give them the overall project goal with some details, and then expect the job to be done (including a lot of decision making) with as little of their involvement as possible (in their opinion, they have more important things to do). Let's say I'd like to take on a developer/scrum master role (I know that even that is debatable, being a team member and scrum master at once), so I simply shouldn't take the role of the product owner as well. So as for my questions: If I'm my company's business owner, do I simply need to be a product owner as well (do these roles include each other)? Can I employ a salesperson who might take the product owner role? Would it be better if it were an experienced developer instead of a salesperson? Is this even a smart move? Lastly, is there another agile approach that might better suit my position? EDIT: Thank you everyone for the good input. I added some comments; any additional info will be greatly appreciated.

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it – previous to SSDT you would need to define a DEFAULT constraint; however, it does feel rather cumbersome to create an object purely for the purpose of pushing through a deployment – that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file: The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NON NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT CONSTRAINT at the same time as the column is created and then immediately removes that constraint:
    ALTER TABLE [dbo].[T1] ADD [C1] INT NOT NULL, CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
    ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];
    You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column – I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, smart defaults could be seen to be “papering over the cracks”. If you think that should be improved go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet
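
    As an aside, the same behaviour can be requested when deploying from the command line instead of the Publish dialog; to the best of my knowledge the corresponding SqlPackage publish property is GenerateSmartDefaults. A hedged sketch (file name and connection string are placeholders, and it assumes SqlPackage.exe is on the PATH):

      import subprocess

      # Publish a dacpac with smart defaults enabled, the command-line counterpart
      # of ticking the option in the publish profile. Placeholder paths/connection string.
      subprocess.run([
          "SqlPackage.exe",
          "/Action:Publish",
          "/SourceFile:MyDatabase.dacpac",
          "/TargetConnectionString:Server=.;Database=MyDatabase;Integrated Security=True",
          "/p:GenerateSmartDefaults=True",
      ], check=True)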

    Read the article

  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here. In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted: “- You could have talked about how by deploying Oracle's Open standards base technology you were able to integrate any new system in your infrastructure in days. - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining perfect employee and customer's single view. - You could have talked about how you are now able to cut response times to your audit and security teams into 1/10th of your former times Instead you spent 6 minutes talking about single sign on and self provisioning? If I didn't knew your IDM offer so well I would now be wondering what its differences from Microsoft's offer was. Sorry for not giving a positive comment here but, please your IDM suite is very good and, you simply aren't promoting it well enough” So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM Architect with several major projects under his belt. Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience? Jaime Cardoso: I started working in "serious" IT in 1998 when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist so, I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since that time I've been bouncing between the system's side like Sun resellers, Solaris stuff and even worked with Sun's Engineering in the making of an Hierarchical Storage Product (Sun CIS if you know it) and the application's side, mostly in LDAP and IDM. Over the years I've been doing support, service delivery and pre-sales / architecture design of IDM solutions in most big customers in Portugal, to name a few projects: - The first European deployment of Sun Access Manager (SAPO – Portugal Telecom) - The identity repository of 5/5 of the Biggest Portuguese banks - The Portuguese government federation of services project DP: OK, in your blog response, you mentioned 3 topics: 1. Using Oracle's standards based architecture; (you) were able to integrate any new system in days: can you give an example? What systems, how long did it take, number of apps/users/accounts/roles etc. JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path, the use of proprietary technologies in interoperability solutions but, then reality kicks in. As an ISP I recall that I made the technical decision to use Active Directory as a central authentication system for the entire IT infrastructure. Clients, systems, apps, everything was there. As a good part of the systems and apps were running on UNIX, then a connector became needed in order to have UNIX boxes to authenticate against AD. And, that strategy worked but, each new machine required the component to be installed, monitoring had to be made for that component and each new app had to be independently certified. 
A self care user portal was an ongoing project, AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't nor had any intention of ever being. When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved so, on the go live, things weren't properly tested and a general outage followed. Several hours and 1 roll back later, everything was back working. But, the ISP still had to change all of its applications to work with the new access methods and reset the effort spent on the self service user portal. To keep with the same strategy, they would also have to trust Microsoft not to change interfaces again. Simply by putting up an Oracle LDAP server in the middle and replicating the user info from the AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them so, integration was made at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self care portal, combined with a user workflow so, all the clearances had to be given before the account was created or updated. Adding a new system as a client for these authentication services was simply a new checkbox in the OS installer and, even True64 systems were, for the first time integrated also with a 5 minute work of a junior system admin. True, all the windows clients and MS apps still went to the AD for their authentication needs so, from the start everybody knew that they weren't 100% free of migration pains but, now they had a single point of problems to look at. If you're looking for numbers: - 500K directory entries (users) - 2-300 systems After the initial setup, I personally integrated about 20 systems / apps against LDAP in 1 day while being watched by the different IT teams. The internal IT staff did the rest. DP: 2. Using Federation allows the business to keep options open for buying and selling companies, and yet maintain a single view for both employee and customer. What do you mean by this? Can you give an example? JC: The market is dynamic. The company that's being bought today tomorrow will be sold again. Companies that spread on different markets may see the regulator forcing a sale of part of a company due to monopoly reasons and companies that are in multiple countries have to comply with different legislations. Our job, as IT architects, while addressing the customers and employees authentication services, is quite hard and, quite contrary. On one hand, we need to give access to all of our employees to the relevant systems, apps and resources and, we already have marketing talking with us trying to find out who's a customer of the bough company but not from ours to address. On the other hand, we have to do that and keep in mind we may have to break up all that effort and that different countries legislation may became a problem with a full integration plan. That's a job for user Federation. you don't want to be the one who's telling your President that he will sell that business unit without it's customer's database (making the deal worth a lot less) or that the buyer will take with him a copy of your entire customer's database. Federation enables you to start controlling permissions to users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins? 
Do you want, because of that, to have a dedicated system for their expenses reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's path? Control the information flow, establish a Federation trust circle and give access to your apps to users that haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view, you obviously control if a user has access to any particular application, either that user is in your local database or stored in a directory on the other side of the world. DP: 3. Cut response times of audit and security teams to 1/10. Is this a real number? Can you give an example? JC: No, I don't have any backing for this number. One of the companies I did system Administration for has a SOX compliance policy in place (I remind you that I live in Portugal so, this definition of SOX may be somewhat different from what you're used to) and, every time the audit team says they'll do another audit, we have to negotiate with them the size of the sample and we spend about 15 man/days gathering all the required info they ask. I did some work with Sun's Identity auditor and, from what I've been seeing, Oracle's product is even better and, I've seen that most of the information they ask would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from Identity Auditor team would do a much better job than me explaining this time savings. Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest – so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.
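
    To make the integration pattern described above concrete, here is a minimal sketch of what "integrating an app against LDAP" typically boils down to, using the ldap3 Python library; the host, DNs and attribute names are illustrative placeholders, not values from the projects discussed:

      from ldap3 import ALL, Connection, Server

      def authenticate(uid, password):
          # Bind as the user to verify the password, then read a couple of attributes.
          server = Server("ldap.example.com", get_info=ALL)
          user_dn = f"uid={uid},ou=people,dc=example,dc=com"
          conn = Connection(server, user=user_dn, password=password)
          if not conn.bind():  # failed bind -> wrong credentials
              return None
          conn.search("dc=example,dc=com", f"(uid={uid})", attributes=["cn", "mail"])
          entry = conn.entries[0] if conn.entries else None
          conn.unbind()
          return entry

      print(authenticate("jdoe", "secret"))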

    Read the article

  • Manager keeps changing requirement specification after every demo.

    - by Jungle Hunter
    Background of my working environment: My manager has no background or understanding of computers or software whatsoever. It is highly likely he hasn't seen code in any form (not even from a physical distance of 10 feet or less) in his life. There is no one who understands the complexity of what I am asked to implement. To the point that if I semi-hardcoded something, no one would know. On the Joel Test we score an unbelievable 0. The problems: The manager, and at times other "seniors", keep changing the requirement specification. These are changes which, if good engineering is to be done rather than patchy "fixes", require changes to the underlying design. There is absolutely no one who looks at code (probably because no one knows how to, or even whether it should be done), which means no one will ever be able to: Appreciate the complexity of the problem or the elegance of the solution. Suggest improvements to the approach. Appreciate the quality of the code. Point out where the code can be improved. A lot of jargon is used which makes sense grammatically but fails to make any sense any other way. It doesn't feel, behave or work like a software company. The question: What should be done? Especially regarding there being no one who would point out improvements in my code. Update: To answer HLGEM's (and possibly others') question about what I've done to try and fix it: I offered to set up Redmine and introduce source control to everyone. I said I would recommend distributed (git or mercurial) but would also talk about centralized ones and let the team decide. The response was that things are being done and will be done within weeks. I haven't seen that, nor am I aware whether other parts of the company use it.

    Read the article

  • Multithreading in Windows Phone 7 emulator: A bug

    - by Laurent Bugnion
    Multithreading is supported in Windows Phone 7 Silverlight applications, however the emulator has a bug (which I discovered and was confirmed to me by the dev lead of the emulator team): If you attempt to start a background thread in the MainPage constructor, the thread never starts. The reason is a problem with the emulator UI thread which doesn’t leave any time to the background thread to start. Thankfully there is a workaround (see code below). Also, the bug should be corrected in a future release, so it’s not a big deal, even though it is really confusing when you try to understand why the *%&^$£% thread is not &$%&%$£ starting (that was me in the plane the other day ;) This code does not work: public partial class MainPage : PhoneApplicationPage { public MainPage() { InitializeComponent(); SupportedOrientations = SupportedPageOrientation.Portrait | SupportedPageOrientation.Landscape; var counter = 0; ThreadPool.QueueUserWorkItem(o => { while (true) { Dispatcher.BeginInvoke(() => { textBlockListTitle.Text = (counter++).ToString(); }); } }); } } This code does work: public MainPage() { InitializeComponent(); SupportedOrientations = SupportedPageOrientation.Portrait | SupportedPageOrientation.Landscape; var counter = 0; ThreadPool.QueueUserWorkItem(o => { while (true) { Dispatcher.BeginInvoke(() => { textBlockListTitle.Text = (counter++).ToString(); }); // NOTICE THIS LINE!!! Thread.Sleep(0); } }); } Note that even if the thread is started in a later event (for example Click of a Button), the behavior without the Thread.Sleep(0) is not good in the emulator. As of now, i would recommend always sleeping when starting a new thread. Happy coding: Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Working with Legacy code #5: The blackhole.

    - by andrewstopford
    Someone creates a class or series of classes for something; the classes are big in size, with large, complicated methods. The effort is a sea of technical debt for the entire team, but in the thick of the daily chaos it is lost. Without the coder talking to the team, with no team code policy and no code reviews (and action points), it remains. Pretty soon the team forgets about that code. A few weeks\months\years go by, some of the team may have left, some may remain, but the business asks the team to add to that code. The team is now looking at a black hole: no one knows how it works, what it does or what it is for; it is a smelly hell hole and the deadline is fast approaching. The team now tries to change the code. With no attempt at unit tests or refactoring, for fear of breaking the black hole, the team does just that: it breaks it, and the business has just lost money. If you are faced with a black hole you need to look back over my series, even a black hole in what might seem like a clean, unit tested application. Don't be fooled into thinking that legacy code does not apply to your code base.  The next stage is: don't let black holes into your codebase. Effective code reviews, team communication and good overall team coding policies will really help. Even if you are faced with a deadline, do not let them appear: stop, take stock of what can be done and who can help. If you allow them through they will grow and grow and grow, and the technical debt will hit you like a tidal wave soon enough.

    Read the article

  • Does TDD really work for complex projects?

    - by Amir Rezaei
    I’m asking this question regarding problems I have experienced during TDD projects. I have noticed the following challenges when creating unit tests. Generating and maintaining mock data: It’s hard and unrealistic to maintain large amounts of mock data. It’s even harder when the database structure undergoes changes. Testing the GUI: Even with MVVM and the ability to test the GUI, it takes a lot of code to reproduce a GUI scenario. Testing the business logic: In my experience, TDD works well if you limit it to simple business logic. However, complex business logic is hard to test, since the number of combinations of tests (the test space) is very large. Contradictions in requirements: In reality it’s hard to capture all requirements during analysis and design. Many times requirements turn out to contradict each other because the project is complex, and the contradiction is found late, during the implementation phase. TDD requires that the requirements are 100% correct. In such cases one could expect the conflicting requirements to be caught while the tests are being created, but the problem is that this isn’t the case in complex scenarios. I have read this question: Why does TDD work? Does TDD really work for complex enterprise projects, or is it in practice limited to certain types of project?

    Read the article

  • How to manage a multiplayer asynchronous environment in a game

    - by Phil
    I'm working on a game where players can set up villages, which can contain defending units. Any of these units (each on its own tile) can be set to "campaign", which means it is no longer defending but can now be used to attack other villages. Each unit on a tile can have up to 100 health. So far so good. Oh, and it's all asynchronous, so even though the server will be aware that your village is being attacked, you won't be until the attack is over. The issue I'm struggling with is the following situation. Let's say a unit on a tile is being attacked by a player from another village. The other player sees your village and is attacking your units. You don't know this is happening, though, so you set your unit to campaign and off you go to attack another village with the very unit that is actually being attacked by this other player. The other player stops attacking your village and leaves your unit with, say, a health of 1, which is then saved to the server. You, however, are attacking another village with this same unit, and now you discover that even though it started off with 100 health, it mysteriously only has 1... Solutions? Ideas? Edit: The simplest solutions are often the best. I referred to Clash of Clans below; well, after a bit more digging it seems that in CoC you can only attack players that are offline! Ha, that almost solves the problem. I say almost because there's still the situation where a player's village could be in the process of being attacked when they come back online; I still need to address that. Edit 2: A solution to the "what happens when a player is attacking your village and you come online" issue could be that the attacking player just gets kicked out of the village at that point and keeps whatever they had won up to then; it's a bit of a fudge, but it might work.

    Read the article

  • Box2d - Attaching a fired arrow to a moving enemy

    - by Satchmo Brown
    I am firing an arrow from the player to moving enemies. When the arrow hits the enemy, I want it to attach exactly where it hit and cause the enemy (a square) to tumble to the ground. Excluding the logistics of the movement and the spin (it already works), I am stuck on the attaching of the two bodies. I tried to weld them together initially but when they fell, they rotated in opposite directions. I have figured that a revolute joint is probably what I am after. The problem is that I can't figure out a way to attach them right where they collide. Using code from iforce2d: b2RevoluteJointDef revoluteJointDef; revoluteJointDef.bodyA = m_body; revoluteJointDef.bodyB = m_e->m_body; revoluteJointDef.collideConnected = true; revoluteJointDef.localAnchorA.Set(0,0);//the top right corner of the box revoluteJointDef.localAnchorB.Set(0,0);//center of the circle b2RevoluteJoint m_joint = *(b2RevoluteJoint*)m_game->m_world->CreateJoint( &revoluteJointDef ); m_body->SetLinearVelocity(m_e->m_body->GetLinearVelocity()); This attaches them but in the center of both of their points. Does anyone know how I would go about getting the exact point of collision so I can link these? Is this even the right method of doing this? Update: I have the exact point of collision. But I still am not sure this is even the method I want to go about this. Really, I just want to attach body A to B and have body B unaffected in any way.
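
    Regarding the update: once you have the world-space contact point, the joint's local anchors are just that point expressed in each body's local frame (Box2D's b2Body::GetLocalPoint does exactly this). The math is simple enough to show on its own; a small library-agnostic Python sketch with placeholder values:

      import math

      def world_to_local(body_position, body_angle, world_point):
          # Inverse of the body transform: translate, then rotate by -angle.
          dx = world_point[0] - body_position[0]
          dy = world_point[1] - body_position[1]
          c, s = math.cos(-body_angle), math.sin(-body_angle)
          return (c * dx - s * dy, s * dx + c * dy)

      arrow_position, arrow_angle = (2.0, 3.0), 0.5    # placeholder body transforms
      enemy_position, enemy_angle = (2.4, 3.1), -0.2
      contact_point = (2.2, 3.05)                      # world point from the contact listener

      local_a = world_to_local(arrow_position, arrow_angle, contact_point)
      local_b = world_to_local(enemy_position, enemy_angle, contact_point)
      print(local_a, local_b)  # use these as localAnchorA / localAnchorB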

    Read the article

  • "Integratable" but not "integrated" GPL

    - by mgibsonbr
    There has been much debate over whether or not merely linking to a piece of code makes it a derivative work. I know FSF says "yes", so according to them I can't dynamically link a non-GPL compatible program to a GPL library and distribute the whole. But I could do that for private use, as long as no code is released to the public. That made me wonder: what if I don't redistribute the GPL code at all? If my program can work alone (reinforcing my claim that it's not a derivative work), but can do more if the GPL library is also installed to the system, couldn't I just release my application under my own licensing terms - without including any GPL code - and post instructions for anyone interested to separately download the GPL code and do the integration "for their private use"? I know it's against the "spirit" of the GPL, so I'm not suggesting it's a good idea to do that. However, this question is bugging me for some time, specially because of the implications of each answer: If I can not do that: can I write another library with a similar API? (before answering "of course you can", remember that having the same API would allow both libraries to be swapped at will by my customers - so I don't need to work too hard on my library or even make it "working". How to determine if a similar program is just similar or is a circumvention attempt?) If I can do that: can I also be paid to perform the service of installing the GPL library for a customer? (I sell them my program, install it in their machines, download and install the GPL library too) can I put the two programs in the same website? In two different CDs? (I know I said the idea was not to redistribute the GPL code, I'm just thinking in excuses people could use to claim they're not redistributing even though they are)

    Read the article

  • ASP.NET hosting: better, faster, cheaper

    - by Fabrice Marguerie
    After seven years with webhost4life, it was time to move on. Especially because of all the troubles with webhost4life due to their internal migration to a new hosting environment (the company has been bought out). I've just moved all my websites elsewhere. I'm now using Arvixe and OrcsWeb. I use OrcsWeb for metaSapiens.com. OrcsWeb kindly offers me free ASP.NET hosting because I'm a Microsoft MVP. I'd like to publicly thank OrcsWeb for this, and I invite you to have a look at what they have to offer. I use Arvixe for all my other websites, the major ones being SharpToolbox.com, JavaToolbox.com, AxToolbox.com, Proagora.com, LinqInAction.net, ClairDeBulle.com, and madgeek.com. Moving all my websites wasn't a walk in the park, but it was well worth it. Let's consider what I get with Arvixe: unlimited diskspace, unlimited data transfer, unlimited domains, dedicated application pools, unlimited POP3 and IMAP mailboxes, unlimited SQL Server 2008 databases, unlimited MySQL 5 databases, .NET 1.1, 2, 3.5 and 4, full trust, IIS 7, daily backups, a powerful and easy to use control panel, and more! All of this for $8 per month. If you don't need all of the above features, you can even get an offer as cheap as $5 per month. You can even get better rates if you use coupon codes, such as TOPHOST (30% discount) or MVCHOSTING (20% discount). All in all, I paid only $134 for two years for a great hosting service! Maybe it's time for you to move too? Disclaimer: the links to OrcsWeb and Arvixe are affiliate links that may bring me some money home if you sign up.

    Read the article

  • Is Joel Test really a good gauging tool?

    - by henry
    I just learned about the Joel Test. I have been a computer programmer for 22 years, but somehow never heard about it before. I consider my best job so far to be a small investment management company with 30 employees and only 3 people in the IT department. I am no longer with them, but I had been working there for 5 years – my longest streak with any given company. To my surprise they scored extremely poorly on the Joel Test. The only two questions I would answer “yes” to are #4: Do you have a bug database? and #9: Do you use the best tools money can buy? Everything else is either “sometimes” or a straight “no”. Here is what I liked about the company, however: a) Good pay; they bragged about it to my face and I bragged about it to their face, so it was almost like a family environment. b) I always knew the big picture. When writing code to solve a particular problem there was no ambiguity about the business nature of that problem. Even though we did not always have written specs, we could ask business users a question anytime, often yelling it across the floor. I could even talk to executives any time I felt like it: no appointment necessary. c) Immediate feedback. Once we implemented a solution and made business users happy, they immediately let us know, and we (programmers) became the heroes of the moment. d) No red tape. I could always buy any tools I deemed necessary, and design solutions the way my professional judgment dictated. e) Flexibility. If I had a mid-day dental appointment near my house rather than near the office, I would send an email to the company: "FYI: I work from home today". As long as one of the 3 IT guys was on the floor (to help traders in case their monitors went dark) they did not care where the other 2 were. So the question becomes: how valuable is the Joel Test? Why bother with it?

    Read the article

  • The Inebriator: A DIY Arduino-powered Cocktail Machine [Video]

    - by Jason Fitzpatrick
    We’ve seen our fair share of geeky alcohol-related projects over the years but this, this, is something to see. The Inebriator is the slickest automated drink machine we’ve laid eyes on. Before we even start talking specs, check out the video above–don’t forget to first assure your liver that you don’t actually have access to such a magnificent device and it can remain calm. As dazzled as we were? The whole thing is powered by an Arduino Mega 2560, sports a cooler with nitrogen pressurization of drink mix bottles, and relies on a rather ingenious and elegantly simple 12v valve system. It even has an RFID security system to prevent party goers with a few drinks in them from messing up the custom drink menu. Hit up the link below for more information, including photos of the guts and technical specs. The Inebriator [via Hack A Day]

    Read the article

  • SO-overflow induced passivity - how to cope?

    - by Ruben
    After not really working on my pet project for a while, I discovered Stackoverflow and upon perusing it more intensely I was quite amazed. I'm a bit of a perfectionist, so when I found eye-openers here highlighting many of the mistakes I made, I first wanted to fix everything. However, it's a pet project for a reason: I'm self-taught and I'm studying psychology, so programming skills can never become priority one (though it often helps, even in this field). Issues that stuck out were numerous security issues (e.g. CSRF-prevention and bcrypt eluded me) not object-oriented (at least the PHP part, the JS-part mostly is) no PHP framework used, so many of my DIY takes on commonly-tackled components (auth, ...) are either bad or inefficient really poor MySQL usage (no prepared statements, mysql extension, heard about setting proper indices two days ago) using mootools even though JQuery seems to be fashionable, so there's more probably always going to be better integration with services I'd like to use (like google visualization) So, my SO-induced frenzy turned into passivity. I can't do it all (soon) in the rather small amount of spare time I can spend on working on my project. I can leave some of the issues be in good conscience (speed stuff: an unfinished & unpublished project will never become popular, right?). No clear conscience without good security though and if I don't use a framework for auth and other complex stuff I'll regret having to do it myself. One obvious answer would probably be going open-source, but I think the project would need to become more impressive before others would commit to it. I can't afford to employ someone either. I do think the project deserves being worked on, though. How should I tackle it anyway? What's the best practice for little-practice people?

    Read the article

  • Why do you hate Java? Is it the language or the framework? [closed]

    - by zneak
    According to you all, Java is the third most-hated language here. The two other most hated languages are PHP and VBScript. (It's quite funny how they stand together on the podium.) I'd like to make it known that the question mostly addresses people who don't like Java. I assume here a number of subjective opinions as facts because they're usually considered true among people who don't like Java, and I don't want to be convinced otherwise here. If you're a Java enthusiast, you might find this question frustrating. It's never been made clear if people hate Java itself, or if they hate it because of the framework, or if it's a mixture of the two. On a side you have the language, where you have: the "everything should be an object" philosophy, even in instances where it should obviously be something else (event handlers I'm pointing you); checked exceptions; the idea that all logic should be presented as methods and properties is a big no-no; the fact that "closures" created by anonymous types only include final variables and arguments, but will allow write access to any member of the parent class; a few more. On the other side, you have the JDK, with... its load of inconsistencies and overengineering; monolithic class hierarchies; meaningless base exceptions like IOException (though other frameworks have similar exception hierarchies); sluggish responsiveness even with Swing; a few more. My question is, do you think that, if either one (Java or the JDK) was taken alone, and the other was dropped in favor of something else, the new combination would be better? For instance, if you could use the C# syntax with the JDK (adapting get*/set* methods into properties, and interfaces with only one method into delegates), or the Java syntax with the .NET Framework (doing the inverse transformations), would things get better in your opinion?

    Read the article
