Search Results

Search found 23127 results on 926 pages for 'based'.


  • Do you tend to write your own name or your company name in your code?

    - by Connell Watkins
    I've been working on various projects at home and at work, and over the years I've developed two main APIs that I use in almost all AJAX-based websites. I've compiled both of these into DLLs and called the namespaces Connell.Database and Connell.Json. My boss recently saw these namespaces in the software documentation for a company project and said I shouldn't be using my own name in the code. (But it's my code!) One thing to bear in mind is that we're not a software company. We're an IT support company, and I'm the only full-time software developer here, so there aren't really any procedures for how we should write software in the company. Another thing to bear in mind is that I do intend to one day release these DLLs as open-source projects. How do other developers group their namespaces within their company? Does anyone use the same class libraries in personal and work projects? And does this work the other way round? If I write a class library entirely at work, who owns that code? If I've seen the library through from start to finish, designed it, and programmed it, can I use it for another project at home? Thanks. Update: I've spoken to my boss about this issue and he agrees that they're my objects and he's fine with me open-sourcing them. Before this conversation I had started changing the objects anyway, which was actually quite productive, and the code now suits this specific project better than it did previously. But thank you to everyone involved for a very interesting debate. I hope all this text isn't wasted and someone learns from it. I certainly did. Cheers!

    Read the article

  • Identifying the version of ATI Catalyst drivers

    - by Snark
    I have bought a new video card based on the ATI Radeon HD 5670 chipset. I couldn't make it work with the latest ATI Catalyst drivers found on their website; only the drivers found on the CD delivered with the card worked. How can I tell which version of Catalyst is installed on my PC (running Windows 7 64-bit)? The ATI Catalyst Control Center reports the following information:
    - Driver Packaging Version: 8.673-091110a-092263C
    - Provider: ATI Technologies Inc.
    - 2D Driver Version: 8.01.01.973
    - 2D Driver File Path: /REGISTRY/MACHINE/SYSTEM/ControlSet001/Control/CLASS/{4D36E968-E325-11CE-BFC1-08002BE10318}/0000
    - Direct3D Version: 8.14.10.0708
    - OpenGL Version: 6.14.10.9120
    - Catalyst™ Control Center Version: 2009.1110.2225.40230
    I do not recognize anything pointing to a "marketing version". The website says the current version of Catalyst is 10.2.

    Read the article

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and want to process the information in them by rendering to a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x/y) in the depth texture will be rendered to (x/y/texture(depth,uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand which slice of the 3D texture a given texel from one of the color textures or the depth texture belongs to. This means the entire process is effectively:
        for each slice
            for each texel in depth texture
                process color textures and render to slice
    So I have to sample the depth texture completely for each slice, and I also have to run the processing (at least up to the discard;) for all texels in it. It would be much faster if I could rearrange the process to:
        for each texel in depth texture
            figure out what slice it should end up in
            process color textures and render to slice
    Is this possible? If so, how? What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.

    Read the article

  • How do I run strace or ltrace on Tomcat Catalina?

    - by flashnode
    Running ltrace isn't trivial. This RHEL 5.3 system is based on Tomcat Catalina (a servlet container), which uses shell scripts to tie everything together. When I tried to find an executable, here's the rabbit hole I went down. /etc/init.d/pki-ca9 calls dtomcat5-pki-ca9:
        # Path to the tomcat launch script (direct don't use wrapper)
        TOMCAT_SCRIPT=/usr/bin/dtomcat5-pki-ca9
    /usr/bin/dtomcat5-pki-ca9 calls a watchdog program:
        /usr/bin/nuxwdog -f $FNAME
    I replaced nuxwdog with a wrapper:
        [root@qantas]# cat /usr/bin/nuxwdog
        #!/bin/bash
        ltrace -e open -o /tmp/ltrace.$(date +%s) /usr/bin/nuxwdog.bak "$@"
        [root@qantas]# service pki-ca9 start
        Starting pki-ca9:                                          [  OK  ]
        [root@qantas]# cat /tmp/ltrace.1295036985
        +++ exited (status 1) +++
    This is ugly. How do I run strace or ltrace on Tomcat?
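
    A minimal sketch of an alternative that avoids wrapper scripts entirely, assuming strace is installed and the Tomcat JVM is already running (the pgrep pattern and output path are illustrative, not from the original post):

        # attach to the running Catalina JVM, follow forked children (-f),
        # timestamp each line (-tt), and log only open() calls
        strace -f -tt -e trace=open -o /tmp/strace.catalina.out -p "$(pgrep -f catalina | head -n1)"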

    Read the article

  • .NET Dependency Management Systems

    - by StriplingWarrior
    I have some .NET projects that are starting to get large enough to merit looking into dependency management solutions, so we don't have to copy binaries from one project to another. Here's what I've found so far:
    - NPanday is based on a port of Maven. I can't tell how actively it is maintained, but the last release was in May 2011.
    - NuGet seems to be under active development, and it appears to have support directly from Microsoft. Some people complained that it "only addresses dependency resolution," but I don't know what else it should address, or whether it has added more features since then. It does appear to have recently added the ability to import binaries as part of the build process, so we don't have to commit them to our repositories.
    - Refix appears to still be in beta, having received no attention since September 2011.
    Would somebody with recent experience using any of these dependency management tools (or any others that work well) share your experience? Is NuGet mature enough to use for dependency management? If not, what does it lack?
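
    As a point of reference on the "import binaries as part of the build" feature, here is a minimal sketch using the nuget command-line client, assuming nuget.exe is on the PATH (the package name and output directory are illustrative):

        # restore every package listed in a project's packages.config into a shared folder
        nuget install MyProject/packages.config -OutputDirectory packages
        # or fetch a single dependency directly
        nuget install Newtonsoft.Json -OutputDirectory packages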

    Read the article

  • Lead Programmer definition clarification

    - by Junaid
    I have been working on PHP and MySQL based web applications for more than 5 years now. My career has progressed from Intern to Jr. Developer to Software Developer to Sr. Software Engineer [Team Lead], which is where I am nowadays. I was looking at the Wikipedia entry on who a lead programmer is. It states the following: A lead programmer is a software engineer in charge of one or more software projects. Alternative titles include Development Lead, Technical Lead, Senior Software Engineer, Software Design Engineer Lead (SDE Lead), Software Manager, or Senior Applications Developer. When primarily contributing in a high-level enterprise software design role, the title Software Architect (or similar) is often used. All of these titles can have different meanings depending on the context. My current job responsibilities are more or less those of a Development Lead, and to some extent close to a Software Architect, because I usually design the core structure of new products, manage 2-3 projects simultaneously, and in the meantime assist other teams with the structural design of their projects. I am usually on calls with clients along with the project managers, and I code most of the time: when my team is stuck somewhere, to share the workload, to integrate some third-party API, and so on. The primary reason for writing this is to ask: do I qualify for a Development Lead title, in accordance with the job responsibilities described above?

    Read the article

  • Leveraging NuGet as a central repository for PowerShell modules

    - by cibrax
    We have been working a lot lately with PowerShell as part of our star product at Tellago Studios, "Moesion". One of the main features we provide in Moesion is the ability to execute PowerShell commands remotely on a given server using a mobile web interface (you can read more in my previous post about Moesion). One of the things we realized in all this time is that PowerShell lacks a central repository where IT guys or we, the developers, can easily grab and reuse commands. All the commands or modules are basically spread across multiple places or websites - personal blogs, TechNet, or CodePlex projects, to name a few - making searching for them very hard. You are usually limited to using your favorite search engine and copying what you find. In addition, there is no easy way to reuse, extend, or version these commands, which also limits any contribution you could make to the community. My friend Jose wrote a great post the other day about the importance of reusing PowerShell modules, and what the mechanism to reuse them is. Jose, however, based his post on a custom implementation using a Git repository for storing the modules. We have NuGet in the .NET platform for sharing and reusing existing libraries or code, so why not leverage it for reusing PowerShell modules as well? Some teams at Microsoft are using NuGet for distributing libraries and binaries, so it would be a great thing for all of us if they also distributed the PowerShell scripting interfaces using NuGet. This applies to the .NET open-source community as well. In fact, it looks like Andrew Nurse had the same idea and implemented a project for this on BitBucket: PsGet.

    Read the article

  • Intermediate SSL Certificates on Azure Websites

    - by amhed
    I have successfully configured an Extended Validation certificate on an Azure Website following this article: http://www.windowsazure.com/en-us/documentation/articles/web-sites-configure-ssl-certificate/ The main (non-technical) stakeholder of the web application went to great lengths to validate that our site is secure, using this site to check the validity of our SSL: http://www.whynopadlock.com/ The check threw the following error: "SSL verification issue (Possibly mis-matched URL or bad intermediate cert.). Details: ERROR: no certificate subject alternative name matches". The certificate is installed using IP Based SSL instead of SNI. This is done because some site visitors still use Internet Explorer 8 on Windows XP, which has no support for SNI and throws a security warning. Is my certificate correctly installed? I received three .CRT files from my SSL provider:
    - PrimaryIntermediate.crt
    - SecondaryIntermediate.crt
    - EndCertificate.crt
    This is how I exported our certificate as a .PFX file for Azure:
        openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt
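
    A hedged sketch of an export that bundles the intermediates into the .PFX, using the file names from the post (chain.crt is an illustrative temporary file); a bundle that is missing its intermediates is one common cause of "bad intermediate cert" warnings:

        # concatenate the intermediates, then include them in the PKCS#12 bundle
        cat PrimaryIntermediate.crt SecondaryIntermediate.crt > chain.crt
        openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in EndCertificate.crt -certfile chain.crt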

    Read the article

  • Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 5)

    - by hinkmond
    So, here's the finished product. I have 8 networked Raspberry Pi devices strategically placed around our Oracle Santa Clara Building 21 office. I attached a JFET-transistor-based EMF sensor to each device to capture any strange fluctuations in the electromagnetic field (which paranormal spirits can supposedly disturb as they pass by). And I have a Web app (embedded in this page) which can take the readings and show a graphical display in real time. As you can see, all the Raspberry Pi devices are blinking away green, indicating they are all operational and all sensors are working correctly. But I don't see anything... Darn... Maybe I have to stare at the Web app for a while. I don't know when the "alleged" ghosts in our Oracle Santa Clara office are supposed to be active, but let me know if you see anything... Oh, and by the way, Happy Halloween from the Internet of Spooky Things! See the previous posts for the full series on the steps to this cool demo:
    - Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 1)
    - Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 2)
    - Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 3)
    - Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 4)
    - Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 5)
    Hinkmond

    Read the article

  • Can DVCSs enforce a specific workflow?

    - by dukeofgaming
    So, I have this little debate at work where some of my colleagues (who are actually in charge of administering our Perforce instance) say that workflows are strictly a process thing, and that the tools we use (in this case, the version control system) have no bearing on them. In other words, their point is that workflows (and their execution) are tool-agnostic. My take is that DVCSs are better at encouraging flexible yet well-defined workflows, because of the branching inherently occurring in the background (anonymous branches), and that you can enforce workflows through the deployment model you establish (e.g. pull requests through repository management, dictator/lieutenant roles with their machines set up as servers, etc.). I think in CVCSs you have to enforce workflows through policies and policing, because there is only one way to share the code, while in DVCSs you just go with the flow based on the infrastructure/permissions that were set up for you. Even after making these arguments, I'm still unable to fully convince them. Am I saying something the wrong way? If not, what other arguments or examples do you think would be useful to convince them? Edit: The main workflow we have been focusing on, because it makes sense to both sides, is the Dictator/Lieutenants workflow. My argument for this particular workflow is that there is no pipeline in a CVCS (because work is just shared in a centralized way), whereas there is an actual pipeline in DVCSs depending on how you deploy read/write permissions. Their argument is that this workflow can be done through branching, and while they do this in some projects (due to policy/policing), in other projects they forbid developers from creating branches.

    Read the article

  • Routing application traffic through specific interface

    - by UnicornsAndRainbows
    Hello all! First question here, so please go easy: I have a Debian Linux 5.0 server with two public interfaces. I would like to route outbound traffic from one instance of an application via one interface and from the second instance via the second interface. There are some challenges:
    - both instances of the application use the same protocol
    - both instances can access the entire internet (so I can't route based on destination network)
    - I can't change the code of the application
    I don't think a typical approach to load balancing all traffic is going to work well, because relatively few destination servers are accessed by the outbound traffic, and all traffic would really need to be distributed pretty evenly across these relatively few servers. I could probably run two virtualized servers on the box and bind each of them to a different external IP, but I'm looking for a simpler solution, maybe using iproute or iptables? Any ideas for me? Thanks in advance - and I'm happy to answer any questions.
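
    A minimal sketch of the iproute/iptables idea, assuming the two instances can run as different local users (app2, eth1, table 102, and the 203.0.113.1 gateway are placeholders, not details from the post):

        # mark outbound packets generated by the second instance's user
        iptables -t mangle -A OUTPUT -m owner --uid-owner app2 -j MARK --set-mark 2
        # send marked traffic through a dedicated routing table with its own default route
        ip rule add fwmark 2 table 102
        ip route add default via 203.0.113.1 dev eth1 table 102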

    Read the article

  • How to route all traffic over site to site VPN tunnel?

    - by Hutch
    I have a site-to-site VPN configured between our main site (Site A) and a remote site (Site B). Site A is 10.60.0.0/16; Site B is 192.168.99.0/24. The firewall in Site B is a Juniper SSG running ScreenOS 6.3, and I'm using a route-based VPN. The tunnel works perfectly, in that from Site A you can reach 192.168.99.0/24 via the tunnel, and from Site B you can reach 10.60.0.0/16 via the tunnel. However, we want Internet traffic from Site B to go via the firewall at Site A, and right now on the Juniper the 0.0.0.0/0 route has the ISP router as its next hop. My understanding is that on the Juniper I can add a route for the /32 public IP at our main site (the one the VPN tunnel connects to) pointing to the ISP router via ethernet0/0 (the SSG's external interface), and then modify the 0.0.0.0/0 route to use our main-site firewall via tunnel.1 (the VPN tunnel). Not sure I've explained that so well, but is my understanding correct? Thanks

    Read the article

  • How to get started in the development industry? [closed]

    - by Peter Fren
    My life is coding. I was born in 1982 and my first computer was an Amiga. I started out learning Amiga BASIC. To cut a long story short, I know many things about several programming languages. I am unemployed (I earned the German Abitur, which should be similar to a high school diploma, and studied a few semesters of mechanical engineering in 2002, when I learned Java) and have no idea how to use this ability. I have never done commissioned work; every task I solved was based on my own wishes and desires. I do not know how to write an FSD or PRD, or how to turn one into code. So the question is: why should anyone hire me? I specialized in Kinect development, but all the jobs I applied for on oDesk and similar sites were awarded to others without me knowing why. I don't know what I should do with my skills professionally. What do you suggest? As this board has weird rules, tell me where to find answers if this is the wrong place.

    Read the article

  • Languages with a clear distinction between subroutines that are purely functional, mutating, state-changing, etc?

    - by CPX
    Lately I've become more and more frustrated that in most modern programming languages I've worked with (C/C++, C#, F#, Ruby, Python, JS, and more) there is very little, if any, language support for determining what a subroutine will actually do. Consider the following simple pseudo-code:
        var x = DoSomethingWith(y);
    How do I determine what the call to DoSomethingWith(y) will actually do? Will it mutate y, or will it return a copy of y? Does it depend on global or local state, or is it only dependent on y? Will it change the global or local state? How does closure affect the outcome of the call? In all languages I've encountered, almost none of these questions can be answered by merely looking at the signature of the subroutine, and there is almost never any compile-time or run-time support either. Usually, the only way is to put your trust in the author of the API, and hope that the documentation and/or naming conventions reveal what the subroutine will actually do. My question is this: do there exist any languages today that make symbolic distinctions between these types of scenarios and place compile-time constraints on what code you can actually write? (There is of course some support for this in most modern languages, such as different levels of scope and closure, the separation between static and instance code, lambda functions, et cetera. But too often these seem to come into conflict with each other. For instance, a lambda function will usually either be purely functional, and simply return a value based on input parameters, or mutate the input parameters in some way. But it is usually possible to access static variables from a lambda function, which in turn can give you access to instance variables, and then it all breaks apart.)

    Read the article

  • Organizing files relationally in Windows 7?

    - by Cayetano Gonçalves
    I just took a new job as a policy analyst, and after even one week, keeping track of hundreds of files - lawsuits, legislation, letters, etc. - in Windows 7 is proving difficult. In my last job I was a database architect, and I helped build Linux-based servers to track files across an entire department; however, there is no way for me to do that at this time in this job. Is there any way to track files/indices/locations/tags/themes and store them in some kind of RDBMS, instead of storing the files in folders that only allow flat and fixed storage? For example, I might have a file that deals with: the ELID organization, the Appeals court, and John Smith. It really is inconvenient to have to decide which one of these tags to turn into a folder and place the file into it, when it falls under all the categories. Even if I could place tags on files the way you can on Stack Exchange, it would solve a lot of heartache.
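
    A sketch of the relational idea, assuming the sqlite3 command-line tool is available (the database name, file path, and row id are illustrative; the tags come from the example in the post):

        sqlite3 files.db "CREATE TABLE files(id INTEGER PRIMARY KEY, path TEXT UNIQUE);"
        sqlite3 files.db "CREATE TABLE tags(file_id INTEGER REFERENCES files(id), tag TEXT);"
        sqlite3 files.db "INSERT INTO files(path) VALUES('C:/cases/smith_appeal.pdf');"
        sqlite3 files.db "INSERT INTO tags VALUES(1, 'ELID organization');"
        sqlite3 files.db "INSERT INTO tags VALUES(1, 'Appeals court');"
        sqlite3 files.db "INSERT INTO tags VALUES(1, 'John Smith');"
        # every file tagged 'Appeals court', regardless of which folder it lives in
        sqlite3 files.db "SELECT path FROM files JOIN tags ON tags.file_id = files.id WHERE tag = 'Appeals court';"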

    Read the article

  • Tips on how to notify a user of new features in your game

    - by brent777
    I have noticed a problem when releasing new features for a game that I wrote for Android and published on the Google Play Store. Because my game is "stage-based" - and not a game like Hay Day, for example, where users go into the game every day since it can't really be finished - my users are not aware of new features that I release for the game. For example, if I publish a new version of my game containing a couple of new stages, most devices will just auto-update the game, and players never notice the update or think to check out what's new. This is why an approach like popping open a dialog that showcases the new feature(s) the first time they open the game after the update is not really sufficient. I am looking for some tips on an approach that will draw my users back into the game, where they could then read more detail about the new features in such a dialog. I was thinking of something like a notification that tells them to check out the new features after an update is done, but I am not sure if this is a good idea. Any suggestions to help me solve this problem would be awesome.

    Read the article

  • How would one run a task sequence within a task sequence in SCCM 2012 SP1

    - by BigHomie
    A shining example: inside all of my task sequences I have a group that installs driver packages conditionally based on computer model, and of course that list does nothing but grow. The fact that it grows isn't a big deal; what is a big deal is that every time it changes I have to manually copy and paste those changes across every task sequence I have, which of course leaves huge room for human error. The same goes for other groups of tasks that are common across task sequences. I'm looking for a solution where I could centrally manage these tasks, whether that means linking a group within one task sequence into other task sequences, or creating a separate task sequence and linking to that. I came across a solution by John Marcum (SCCM MVP) that mentioned this ability, but that was a while ago and I can't find the link anymore to see if it's even still being updated/maintained. In any case, I'm looking for more of a free solution, and even using PowerShell or the ConfigMgr SDK is fine with me; I'm no stranger to either. Update: Getting close: http://msdn.microsoft.com/en-us/library/jj217869.aspx

    Read the article

  • Fast determination of whether objects are onscreen in 2D

    - by Ben Ezard
    So currently, I have this in each object's renderer's update method:
        float a = transform.position.x * Main.scale;
        float b = transform.position.y * Main.scale;
        float c = Camera.main.transform.position.x * Main.scale;
        float d = Camera.main.transform.position.y * Main.scale;
        onscreen = a + width - c > 0 && a - c < GameView.width &&
                   b + height - d > 0 && b - d < GameView.height;
    transform.position is a 2D vector containing the game engine's definition of where the object is; this is then multiplied by Main.scale to translate that coordinate into actual screen space. Similarly, Camera.main.transform.position is the in-engine representation of where the main camera is, and this is also multiplied by Main.scale. The problem is, as my game is tile-based, thousands of these updates get called every frame, just to determine whether or not each object should be drawn. How can I improve this, please?

    Read the article

  • DNS not responding

    - by Born2win
    I have had a D-Link DIR-615 router for around six months. My internet connection is VPN (PPTP) based, i.e. I was given a username and password by my ISP, and my IP address is dynamic. But for the past few days I have been experiencing a serious problem. My router connects normally (I can see the yellow light), but my computer gives a "DNS not responding" error. I have tried everything (reset, reboot, etc.) but no success. Any help would be appreciated :)
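
    One way to narrow it down, assuming you can run nslookup from a command prompt (8.8.8.8 is just a well-known public resolver used here for comparison, not a detail from the post):

        nslookup example.com             # uses the DNS server the router handed out
        nslookup example.com 8.8.8.8     # bypasses the router's DNS; if only this works, the router's DNS relay is the likely culprit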

    Read the article

  • What determines which Javascript functions are blocking vs non-blocking?

    - by Sean
    I have been doing web-based JavaScript (vanilla JS, jQuery, Backbone, etc.) for a few years now, and recently I've been doing some work with Node.js. It took me a while to get the hang of "non-blocking" programming, but I've now gotten used to using callbacks for IO operations and whatnot. I understand that JavaScript is single-threaded by nature. I understand the concept of the Node "event queue". What I DON'T understand is what determines whether an individual JavaScript operation is "blocking" vs. "non-blocking". How do I know which operations I can depend on to produce an output synchronously for me to use in later code, and which ones I'll need to pass callbacks to so I can process the output after the initial operation has completed? Is there a list of JavaScript functions somewhere that are asynchronous/non-blocking, and a list of ones that are synchronous/blocking? What is preventing my JavaScript app from being one giant race condition? I know that operations that take a long time, like IO operations in Node and AJAX operations on the web, need to be asynchronous and therefore use callbacks - but who determines what qualifies as "a long time"? Is there some sort of trigger within these operations that removes them from the normal "event queue"? If not, what makes them different from simple operations like assigning values to variables or looping through arrays, which it seems we can depend on to finish in a synchronous manner? Perhaps I'm not even thinking of this correctly - hoping someone can set me straight. Thanks!

    Read the article

  • Challenge Ends on Friday!

    - by Yolande Poirier
    This is your last chance to win a JavaOne trip. Submit a project video and code for the IoT Developer Challenge by this Friday, May 30. 12 JavaOne trips will be awarded to 3 professional teams and one student team. Members of two student teams will win laptops and certification training vouchers. Ask your last-minute questions on the coaching form or the Challenge forum; they will be answered promptly. Your project video should explain how your project works. Any common video format, such as mp4, avi, or mov, is fine. Your project must use Java Embedded - whether Java SE Embedded or ME Embedded - with the hardware of your choice, including any devices, boards, and IoT technology. Projects will be judged on implementation, innovation, and business usefulness. More details are on the IoT Developer Challenge website. Just for fun, here is a video of Vinicius Senger giving a tour of his home lab and showing his boards and gadgets.

    Read the article

  • What You Said: How You Share Your Photos

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your favorite tips, tricks, and tools for sharing photos with friends and family. Now we're back to highlight the ways HTG readers share their pics. By far the most popular method of photo sharing was to upload the pictures to cloud-based storage. Many readers took advantage of sizable SkyDrive accounts. Dragonbite writes: I used to use PicasaWeb (uploaded from Shotwell) until I got the SkyDrive with 25 GB available. My imported pictures are automatically synchronized with SkyDrive, and I then send out a link to whomever I want. I have another (desktop) computer where all of the pictures from mine and my wife's cameras are stored, so if I need to free up some space on SkyDrive or my Windows 7 laptop, I double-check they are on the desktop computer before deleting them from my laptop (and thus from SkyDrive as well). I wish SkyDrive enabled some features like rotating, or searching by tagged person.

    Read the article

  • Increasing load capacity for growing website

    - by markxi
    My website currently runs on a dedicated web server (with LiteSpeed) and a dedicated MySQL database server. It's a download-based site with a lot of user-generated content, which can be streamed and downloaded; there are also thousands of thumbnails and much static content. I'm at the stage where the web server can no longer handle the amount of traffic, so I'm looking at how best to increase capacity, considering the large amount of downloadable content. My host suggests mirroring everything on a second web server and distributing the load between them using either DNS Made Easy or my own load balancer (using ldirectord) in front of the two web servers. Could anyone advise whether the above method would be the best option? Does anyone have experience with DNS Made Easy and/or ldirectord? I'd appreciate any help.

    Read the article

  • Planning milestones and time

    - by Ignas
    I was hired by a marketing company a year ago, initially for link building / SEO stuff, but I'm actually a Web developer and took the job just out of desperation to have one (I'm still quite young and just finished the 2nd year of University). From the 3rd day my boss realised that I'm not into that stuff at all, and since he had an idea for a web-based app, we started to plan it. I estimated that it shouldn't take me longer than two months to build, but as I was making it we soon realised that we wanted to add more and more stuff to make it even better. So the development on my own lasted about 4 months, but by then it had become an enterprise-size app, and we hired another programmer to work alongside me. The guy was awesome at what he did, but because I was assigned to be programmer/project manager, I had to set up milestones with deadlines, and we missed most of them: most of the time there was simply too much work, and my lack of experience kept me setting overly optimistic deadlines. We still kept adding features, and we changed the architecture of the application twice. My boss is a great guy, and he gets that adding features expands the time frame in which things should be done, so he wasn't angry at me or the other guy. But I was feeling bad (I still am) that I am poor at planning. I gained loads of experience on the programming side, but I still lack the management/planning skills, which makes me go nuts. So over the last year I have dedicated probably about 8 months of work to this app (obviously my studies affected it), and we're launching as a closed beta this month. So my question is: how do I get better at planning/managing a project, and how do you estimate time? What do you take into consideration when setting goals? I'm working alone again because the other guy moved away from the city, but I'm sure we'll be hiring to help me maintain it, so I need to get better at this. Any hints, pointers, or anything on the topic are appreciated.

    Read the article

  • Email server for huge number of subscriber

    - by bogha
    My company is thinking of providing a free email account to each of its customers. As a new company, we assume our corporate email system will be MS Exchange, which will support about 1,000 employees. They are asking why we don't simply add the customer list to the Exchange users. My suggestion was to separate the two systems: for corporate use we can use Exchange, but for customers (around 30,000) we should use a Linux-based system. My only argument was that Linux can handle enterprise services at this scale, where Exchange may struggle. What do you suggest? And if you agree with me on choosing Linux as the server platform, what do you suggest as an alternative to Exchange on Linux? Thank you.

    Read the article
