Search Results


  • Microsoft InTune in a SBS2008 Environment

    - by Richard Craddock
    I'm looking into using InTune and Office 365, mainly for the licensing benefits they bring, but also because our company is moving to having more users based around the globe and I feel it will make managing the computers easier. Currently we have an SBS2008 (Premium) server running Exchange and File and Print, which I need to keep for File and Print (Exchange can move to Office 365). I can't find any information on how AD is synchronized between InTune and the SBS server (users will still need access control to file shares on the SBS server). Does anyone have any experience in doing this, or even know if it's possible? Richard


  • How to route all traffic over site to site VPN tunnel?

    - by Hutch
    I have a site-to-site VPN configured between our main site (Site A) and a remote site (Site B). Site A is 10.60.0.0/16; Site B is 192.168.99.0/24. The firewall in Site B is a Juniper SSG running ScreenOS 6.3 and I'm using a route-based VPN. The tunnel works perfectly, in that from Site A you can reach 192.168.99.0/24 via the tunnel, and from Site B you can reach 10.60.0.0/16 via the tunnel. However, we want it so that if you're in Site B and want the Internet, it goes via the firewall at Site A; right now on the Juniper, 0.0.0.0/0 has the ISP router as next hop. My understanding is that on the Juniper I can set a route for the /32 public IP at our main site (the one the VPN tunnel connects to) pointing at the ISP router via ethernet0/0 (the SSG's external interface), and then modify the 0.0.0.0/0 route to use our main site firewall via tunnel.1 (the VPN tunnel). I'm not sure I've explained that well, but is my understanding correct? Thanks
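
    A minimal sketch of that routing change in ScreenOS terms (203.0.113.10 stands in for Site A's public VPN endpoint and 198.51.100.1 for the ISP router; both addresses are hypothetical placeholders). The /32 host route keeps the IKE/IPsec traffic itself exiting locally, and the replaced default route sends everything else down the tunnel:

        set route 203.0.113.10/32 interface ethernet0/0 gateway 198.51.100.1
        unset route 0.0.0.0/0
        set route 0.0.0.0/0 interface tunnel.1

    Site A's firewall would also need security policy and NAT rules letting 192.168.99.0/24 out to the Internet.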


  • Organizing files relationally in Windows 7?

    - by Cayetano Gonçalves
    I just took a new job as a policy analyst, and after even one week, keeping track of hundreds of files - lawsuits, legislation, letters, etc. - in Windows 7 is proving difficult. In my last job I was a database architect and I helped build Linux-based servers to track files across an entire department; however, there is no way for me to do that at this time in this job. Is there any way to track files/indices/locations/tags/themes and store them in some kind of RDBMS, instead of storing the files in folders that only allow for flat and fixed storage? For example, if I have a file that deals with the ELID organization, Appeals court, and John Smith, it really is inconvenient to have to decide which one of these tags to turn into a folder and place the file into it, when it falls under all the categories. Even if I could place tags on files the way you can on Stack Exchange questions, it would solve a lot of heartache.
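
    One low-overhead approach, given that a departmental server isn't an option, is a small SQLite index kept alongside the files. A minimal sketch in Python (the schema and tag names are illustrative, not prescriptive):

        import sqlite3

        con = sqlite3.connect("file_index.db")
        con.executescript("""
        CREATE TABLE IF NOT EXISTS files (
            id   INTEGER PRIMARY KEY,
            path TEXT UNIQUE NOT NULL   -- where the document lives on disk
        );
        CREATE TABLE IF NOT EXISTS tags (
            id   INTEGER PRIMARY KEY,
            name TEXT UNIQUE NOT NULL   -- e.g. 'Appeals court', 'John Smith'
        );
        CREATE TABLE IF NOT EXISTS file_tags (
            file_id INTEGER REFERENCES files(id),
            tag_id  INTEGER REFERENCES tags(id),
            PRIMARY KEY (file_id, tag_id)   -- one file can carry many tags
        );
        """)

        # Every file tagged 'Appeals court', regardless of which folder it sits in:
        rows = con.execute("""
            SELECT f.path FROM files f
            JOIN file_tags ft ON ft.file_id = f.id
            JOIN tags t ON t.id = ft.tag_id
            WHERE t.name = ?
        """, ("Appeals court",)).fetchall()

    Because a file can appear in file_tags any number of times, the "which folder does this belong in?" question disappears entirely.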


  • Can DVCSs enforce a specific workflow?

    - by dukeofgaming
    So, I have this little debate at work where some of my colleagues (who are actually in charge of administering our Perforce instance) say that workflows are strictly a process thing, and that the tools we use (in this case, the version control system) have no bearing on them. In other words, the point they make is that workflows (and their execution) are tool-agnostic. My take is that DVCSs are better at encouraging flexible yet well-defined workflows, because of the branching inherently occurring in the background (anonymous branches), and because you can enforce workflows through the deployment model you establish (e.g. pull requests through repository management, dictator/lieutenant roles with their machines set up as servers, etc.). I think in CVCSs you have to enforce workflows through policies and policing, because there is only one way to share the code, while in DVCSs you just go with the flow based on the infrastructure/permissions that were set up for you. Even having made these arguments, I'm still unable to fully convince them. Am I saying something the wrong way? If not, what other arguments or examples do you think would be useful to convince them? Edit: The main workflow we have been focusing on, because it makes sense to both sides, is the dictator/lieutenants workflow. My argument for this particular workflow is that there is no pipeline in a CVCS (because work is just shared in a centralized way), whereas there is an actual pipeline in DVCSs depending on how you deploy read/write permissions. Their argument is that this workflow can be done through branching, and while they do this in some projects (due to policy/policing), in other projects they forbid developers from creating branches.


  • Cleaning Up After Chrome

    - by Mark Treadwell
    I find Google Chrome, which I have no interest in, is continually getting installed on machines in my house, mostly due to Adobe Shockwave bringing it along as an install package. (Family members are agreeing to the download, not realizing that Chrome is getting dropped as well.) My major issue after uninstalling Chrome is that you can no longer click on links in Outlook emails. There is a lot on the web about this, and Google has not been proactive about fixing their uninstaller. I have now added a registry file to my Win64 systems to reset the problem registry keys and clear the error. This registry file is pretty simple. It merely resets HKEY_CURRENT_USER\Software\Classes\.htm, HKEY_CURRENT_USER\Software\Classes\.html, and HKEY_CURRENT_USER\Software\Classes\.shtml back to their default values of "htmlfile". Chrome takes over the handling of these file extensions because its default install makes itself the default web browser, and the Chrome uninstaller fails to clear/reset them. In troubleshooting this, I looked in my registry based on the web info about the Chrome uninstall problem. Since my system had never had Chrome installed, my registry did not have the problem keys. To troubleshoot, I installed (ugh!) and uninstalled Chrome. Sure enough, Chrome left the expected debris with a value string of "ChromeHTML.PR2EPLWMBQZK3BY7Z2BFBMFERU" or something similar. Resetting these values fixed the problem. I see that Chrome leaves quite a bit of debris behind in the registry. I guess it creates the keys then leaves them behind, even though their presence (with bad data) subsequently affects operations.
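
    A minimal sketch of such a .reg file, matching the reset described above (import it per user; the semicolon line is a comment):

        Windows Registry Editor Version 5.00

        ; Reset the per-user HTML handlers that Chrome's uninstaller leaves behind
        [HKEY_CURRENT_USER\Software\Classes\.htm]
        @="htmlfile"

        [HKEY_CURRENT_USER\Software\Classes\.html]
        @="htmlfile"

        [HKEY_CURRENT_USER\Software\Classes\.shtml]
        @="htmlfile"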


  • Jumbo Frames on DIR-655

    - by Spookyone
    Hello, I am trying to set up jumbo frames on my gigabit home LAN, but no luck so far. My setup is:

        D-Link DIR-655 router, HW revision A3, firmware 1.21 EU
        Synology DS107+, firmware 3.0-1337
        Laptop w/ Win7 x64, external PCIx NIC managed by the "Generic Marvell Yukon 88E8053 based Ethernet Controller" driver

    The router is supposed to support jumbo frames but doesn't offer any relevant setting. I set the Jumbo Packet value to 9000 on both the NIC and the Synology box, but it doesn't work: ping -f -l 8972 says "Packet needs to be fragmented but DF set" (8972 bytes of ICMP payload plus 28 bytes of IP/ICMP headers makes exactly 9000). Is there another setting I've overlooked, does the DIR-655 not actually support jumbo frames, or what else could be the problem?
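
    For what it's worth, two quick checks from the Win7 side (192.168.0.10 is a placeholder for the Synology box's address): netsh shows the MTU actually in effect per interface, and the ping repeats the 8972-byte probe against the NAS directly:

        netsh interface ipv4 show subinterfaces
        ping -f -l 8972 192.168.0.10

    If netsh already reports an MTU of 9000 on the laptop's NIC, the fragmentation error points at the router or the NAS end rather than the NIC driver.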


  • How would one run a task sequence within a task sequence in SCCM 2012 SP1

    - by BigHomie
    A shining example: inside all of my task sequences I have a group that installs driver packages conditionally, based on computer model. And of course, this list does nothing but grow. The fact that it grows isn't a big deal; what is a big deal is that every time it changes I have to manually copy and paste those changes across every task sequence I have, which of course leaves huge room for human error. The same goes for other groups of tasks that are common across task sequences. I'm looking for a solution where I could centrally manage these tasks, be it linking other task sequences to a group within another task sequence, or creating a separate task sequence and linking to that. I came across a solution by John Marcum (SCCM MVP) that mentioned this ability, but that was a while ago and I can't find the link to it anymore to see if it's even still being updated/maintained. I'm looking for more of a free solution, and using PowerShell or the ConfigMgr SDK is fine with me - I'm no stranger to either. Update - getting close: http://msdn.microsoft.com/en-us/library/jj217869.aspx


  • Optimal way to make MySQL backups for fairly large databases (MyISAM / InnoDB)

    - by WinkyWolly
    Currently we have one beefy MySQL database server that runs a couple of high-traffic Django-based websites as well as some e-commerce websites of decent size. As a result we have a fair number of large databases using both InnoDB and MyISAM tables. Unfortunately we've recently hit a wall due to the amount of traffic, so I've set up another master server to help alleviate reads/backups. At the moment I simply use mysqldump with a few arguments, and it's proven to be fine... until now. Obviously mysqldump is a quick method but a slow one, and I believe we've outgrown its use. I now need a good alternative and have been looking into utilizing Maatkit's mk-parallel-dump utility or an LVM snapshot solution. Short version:

        I have some fairly large MySQL databases I need to back up
        The current mysqldump-based method is inefficient and slow (causing issues)
        I'm looking into something such as mk-parallel-dump or LVM snapshots

    Any recommendations or ideas would be appreciated - since I have to redo how we're doing things, I'd rather have it done properly and most efficiently :).
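
    For reference, the LVM approach usually looks something like the sketch below (the volume group, volume, and mount point names are hypothetical; note the FLUSH TABLES session must stay open until the snapshot exists, because the lock is released when that client disconnects):

        # In a mysql client session, and hold the session open:
        #   FLUSH TABLES WITH READ LOCK;
        lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql-data
        # Back in the mysql session:
        #   UNLOCK TABLES;
        mount /dev/vg0/mysql-snap /mnt/mysql-snap
        tar czf /backup/mysql-$(date +%F).tar.gz -C /mnt/mysql-snap .
        umount /mnt/mysql-snap
        lvremove -f /dev/vg0/mysql-snap

    Tools like mylvmbackup wrap exactly this sequence; for pure InnoDB tables, mysqldump --single-transaction is another way to get a consistent dump without the global read lock.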


  • Are webhosts that require NS instead of a CNAME common?

    - by billpg
    I've just signed up with a webhost (which I prefer not to name) and I'm reasonably happy with it. The only nit came when I was ready to put a site online and I asked the support line what name I should point my 'www' CNAME to. They responded that they don't do that, and that I need to set my domain's NS records for the hosting to work: "Why would you ever want to do it that way? Our service to you includes DNS, and our servers are probably much better than the one your registrar provides." This was a bit of a surprise, as all of the other webhosts I've worked with happily support this. I've set up (e.g.) gallery.myfriend.example for friends by having them configure their DNS to CNAME 'gallery' to the name of a shared server at a webhost, and the webhost does name-based hosting for 'gallery.myfriend.example'. (Of course, if the webhost ever tells me I'm being moved from A.webhost.example to B.webhost.example, it would be my responsibility to change where the CNAME points. Really good webhosts would instead create myname.webhost.example for the IP of whichever server my stuff happens to be on, so I'd never have to worry about keeping my CNAME up to date.) Is my impression correct that most webhosts will happily support a service that begins with a CNAME hosted elsewhere, or is it really more common that webhosts will only provide service if they control the DNS too?
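
    In zone-file terms, the arrangement described above is just one record in the friend's zone (shared42.webhost.example is a hypothetical shared-server name):

        gallery.myfriend.example.    IN  CNAME  shared42.webhost.example.

    The registrar's DNS keeps serving the zone; the webhost never controls it and only has to answer HTTP for that name (name-based virtual hosting), which is exactly what the NS-only host refuses to support.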


  • What determines which Javascript functions are blocking vs non-blocking?

    - by Sean
    I have been doing web-based Javascript (vanilla JS, jQuery, Backbone, etc.) for a few years now, and recently I've been doing some work with Node.js. It took me a while to get the hang of "non-blocking" programming, but I've now gotten used to using callbacks for IO operations and whatnot. I understand that Javascript is single-threaded by nature. I understand the concept of the Node "event queue". What I DON'T understand is what determines whether an individual Javascript operation is "blocking" vs. "non-blocking". How do I know which operations I can depend on to produce an output synchronously for me to use in later code, and which ones I'll need to pass callbacks to so I can process the output after the initial operation has completed? Is there a list of Javascript functions somewhere that are asynchronous/non-blocking, and a list of ones that are synchronous/blocking? What is preventing my Javascript app from being one giant race condition? I know that operations that take a long time, like IO operations in Node and AJAX operations on the web, need to be asynchronous and therefore use callbacks - but who determines what qualifies as "a long time"? Is there some sort of trigger within these operations that removes them from the normal "event queue"? If not, what makes them different from simple operations like assigning values to variables or looping through arrays, which it seems we can depend on to finish in a synchronous manner? Perhaps I'm not even thinking of this correctly - hoping someone can set me straight. Thanks!


  • Picking the right license

    - by nightcracker
    Hey, I'm having some trouble picking the right license for my work. I have a few requirements:

        Not copyleft like the GNU (L)GPL, and allows for redistribution under other licenses
        Allows other people to redistribute your (modified) work, but prevents other people from freely making money off my work (they need to ask for/buy a commercial license if they want to)
        Compatible with the GNU (L)GPL
        Not responsible for any damage caused by my work

    Now, I wrote my own little license based on the BSD and CC Attribution-NonCommercial 3.0 licenses, but I am not sure it would hold in court:

        Copyright <year> <copyright holder>. All rights reserved.

        Redistribution of this work, with or without modification, is permitted
        provided that the following conditions are met:

        1. All redistributions must attribute <copyright holder> as the original
           author or licensor of this work (but not in any way that suggests that
           they endorse you or your use of the work).
        2. All redistributions must be for non-commercial purposes and free of
           charge unless specific written permission by <copyright holder> is given.

        This work is provided by <copyright holder> "as is" and any express or
        implied warranties are disclaimed. <copyright holder> is not liable for
        any damage arising in any way out of the use of this work.

    You could help me by either:

        Pointing me to an existing license which satisfies my requirements
        Confirming that my license has no major flaws and would most likely hold in court

    Thanks!


  • Increasing load capacity for growing website

    - by markxi
    My website currently runs on a dedicated web server (with LiteSpeed) and a dedicated MySQL database server. It's a download-based site with a lot of user-generated content, which can be streamed and downloaded; there are also thousands of thumbnails and much static content. I'm at the stage where the web server can no longer handle the amount of traffic, so I'm looking at how best to increase capacity, considering the large amount of downloadable content. My host suggests mirroring everything on a second web server and distributing the load between them using either DNS Made Easy or my own load balancer (using ldirectord) in front of the two web servers. Could anyone advise whether the above method would be the best option? Does anyone have any experience with DNS Made Easy and/or ldirectord? I'd appreciate any help.


  • DNS not responding

    - by Born2win
    I have a D-Link DIR-615 router and have had it for around six months. My internet connection is VPN (PPTP) based, i.e. I have been given a username and password by my ISP, and my IP address is dynamic. For the last few days I have been experiencing a serious problem: my router connects normally (I can see the yellow light), but my computer gives a "DNS not responding" error. I have tried everything (reset, reboot, etc.) but no success. Any help would be appreciated :)
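
    A quick way to narrow down whether the router's DNS proxy or the upstream resolver is at fault is to query each directly from the Windows command line (8.8.8.8 is just an example public resolver):

        ipconfig /flushdns
        nslookup example.com
        nslookup example.com 8.8.8.8

    The first nslookup queries whatever DNS server the router handed out; the second bypasses it entirely. If only the second lookup succeeds, the router (or the DNS servers pushed over the PPTP session) is the likely culprit.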


  • Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 5)

    - by hinkmond
    So, here's the finished product. I have 8 networked Raspberry Pi devices strategically placed around our Oracle Santa Clara Building 21 office. I attached a JFET-transistor-based EMF sensor to each device to capture any strange fluctuations in the electromagnetic field (which, supposedly, paranormal spirits can change as they pass by). And I have a Web app (embedded in this page) which can take the readings and show a graphical display in real time. As you can see, all the Raspberry Pi devices are blinking away green, indicating they are all operational and all sensors are working correctly. But I don't see anything... Darn... Maybe I have to stare at the Web app for a while. I don't know when the "alleged" ghosts in our Oracle Santa Clara office are supposed to be active, but let me know if you see anything... Oh, and by the way, Happy Halloween from the Internet of Spooky Things! See the previous posts for the full series on the steps to this cool demo:

        Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 1)
        Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 2)
        Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 3)
        Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 4)
        Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 5)

    Hinkmond


  • How to deal with OpenGL and Fullscreen on OS X

    - by Armin Ronacher
    I do most of my development on OS X, and for my current game project this is my target environment. However, when I play games I play on Windows. As a Windows gamer I am used to Alt+Tab switching from within the game to the last application that was open. On OS X I currently can't find a game that supports that, nor a way to make it possible. My current project is based on SDL 1.3, and I can see that Cmd+Tab is a sequence that is sent directly to my application and not intercepted by the operating system. My first attempt was to hide the rendering window on Cmd+Tab, which certainly works, but it has the disadvantage that a hidden OpenGL window in SDL cannot be restored when the user tabs back to the application. First, there is no event fired for that (or I can't find it); second, the core problem is that when the application window is hidden, my game is still the active application - the window has merely disappeared. That is incredibly annoying. Any ideas how to approximate the Windows/Linux behavior for Alt+Tab?


  • How an SSD hard drive affected the speed of your website (ASP.NET/LINQ/MS SQL database)

    - by Sergey Osypchuk
    I have a small database (<1 GB), but we have a lot of complex logic in the website and clients complain about render times of 3-5 seconds. We are not Google, and thousands of users a day is our dream, so size is not a problem, but speed is important. Can anybody share their experience with SSD drives for an ASP.NET (MVC)/LINQ/MS SQL based application? How much did your performance increase? UPDATE: this whitepaper states that it will be 20 times faster: http://www.texmemsys.com/files/f000174.pdf


  • What is a SIP 'Gateway' and how is it different from a SIP Proxy/Registrar?

    - by Shrey
    Recently I started looking at SIP implementations for future work. I was reading (Googling) about what SIP is and how to go about implementing an end-to-end SIP-enabled VoIP network. What I did not get is: what is a SIP gateway used for? How is it different from a SIP proxy server or a SIP DNS/locator-style server? I understand QoS would probably be one primary factor - like dedicating a set bandwidth for SIP/VoIP-specific I/O over a network. Anything else? Can anyone help me with any other hints/pointers? I fully understand that this is quite a basic question, but I really couldn't find any text which could clear up my doubt about what 'gateway' means in the SIP context and what differentiates it from other SIP-based network components (like softphones, proxies, etc.). Thanks a lot.


  • What You Said: How You Share Your Photos

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your favorite tips, tricks, and tools for sharing photos with friends and family. Now we're back to highlight the ways HTG readers share their pics. Image available as wallpaper here. By far the most popular method of photo sharing was to upload the pictures to cloud-based storage. Many readers took advantage of sizable SkyDrive accounts. Dragonbite writes: I used to use PicasaWeb (uploaded from Shotwell) until I got the SkyDrive w/ 25 GB available. My imported pictures are automatically synchronized with SkyDrive and I then send out a link to whomever I want. I have another (desktop) computer where all of the pictures from my and my wife's camera imports are stored, so if I need to free up some space on SkyDrive or my Windows 7 laptop, I double-check they are on the desktop computer before deleting them from my laptop (and thus from SkyDrive as well). I wish SkyDrive enabled some features like rotating, or searching by tagged person.


  • Implementation Specialist OPN Exam for OBI Suite 11g is Now LIVE

    - by Mike.Hallett(at)Oracle-BI&EPM
    The OPN specialization exam for implementation consultants of OBI Suite 11g is now live and ready for all partners. You can now update your specialization certification to the latest product version, 11g, for OBI: until recently, the accreditation had examined skills for OBI 10g. For more details see Oracle Business Intelligence Foundation Suite 11g Essentials (1Z1-591), where you can apply for this Oracle Implementation Specialist credential. This exam is primarily intended for consultants who are skilled in implementing solutions based on Oracle Business Intelligence Foundation Suite 11g. The certification covers skills such as: installing OBIEE, building the BI Server metadata repository, building BI dashboards, constructing ad hoc queries, defining security settings, and configuring and managing cache files. The exam targets the intermediate-level implementation team member. Up-to-date training and field experience are recommended. Also note the new OPN online Sales & Pre-sales Assessment Tests, now available:

        Oracle Business Intelligence Foundation Suite 11g Sales Specialist
        Oracle Business Intelligence Foundation Suite 11g PreSales Specialist
        Oracle Business Intelligence Foundation Suite 11g Support Specialist

    FREE Certification Testing at OpenWorld: Are you attending OPN Exchange @ OpenWorld? Then join us at the OPN Specialist Test Fest, October 1st-4th 2012, Marriott Marquis Hotel. Pre-register here now! For more information: OPN Certified Specialist exams; OPN Certified Specialist FAQ; Enablement 2.0 Get Specialized!


  • Question about a simple design problem

    - by Uri
    At work I stumbled upon a method. It made a query and returned a String based on the result of the query, such as the ID of a customer. If the query didn't return a single customer, it'd return null. Otherwise, it'd return a String with the IDs of the owners. It looked like this:

        String error = getOwners();
        if (error != null) {
            throw new Exception("Can't delete, the flat is owned by: " + error);
        }
        ...

    Ignoring the fact that getOwners() returns null when it should instead return an empty String, two things are happening here: it checks whether the flat is owned by someone, and it returns the owners. I think a more readable logic would be to do this:

        if (isOwned) {
            throw new Exception("Can't delete, the flat is owned by: " + getOwners());
        }
        ...

    The problem is that the first way does with one query what I would do with two queries to the database. What would be a good solution to this, balancing good design and efficiency?


  • PRNG test suite: bitstream and stream length

    - by Martin Trigaux
    On the NIST website there is a tool called sts (Statistical Test Suite) that allows us to test the validity of a pseudo-random number generator based on a stream of bits given as input. When running the program, there are two variables I am not sure I understand: the stream length and the number of bitstreams. Is the stream length the size of the file? The number of bits inside it? The size of a single bitstream? Are the bitstreams subsets of the whole file? Chosen how? Let's say I have a text file containing 1,000,000 bits in ASCII. What should my arguments be? You can find the user manual here if needed (I didn't find an explanation of these variables in it). Thank you


  • A space-efficient guest filesystem for grow-as-needed virtual disks ?

    - by Steve Schnepp
    A common practice is to use non-preallocated virtual disks. Since they only grow as needed, this makes them perfect for fast backup, overallocation, and creation speed. Since file systems are usually based on physical disks, they have a tendency to use the whole available area [1] in order to increase speed [2] or reliability [3]. I'm searching for a filesystem that does the exact opposite: one that tries to touch the minimum number of blocks needed, through aggressive block reuse. I would happily trade some performance for space usage. There is already a similar question, but it is rather general. I have a very specific goal: space-efficiency.

        1. Like page caching uses all the free physical memory
        2. Canonical example: online defragmentation
        3. Canonical example: snapshotting


  • Why would searching in Websphere Portal Administration exclude some results?

    - by Scott Leis
    I have logged into a website based on Websphere Portal 6.0, gone to the Administration console, then to Portlet Management - Portlets. At a guess, there are about 200 portlets on this server. 22 portlets have a title starting with "IGM", but if I use the "Title starts with" search option and enter "igm" (or any case variation), only one of these portlets is found. The portlets excluded from the "Title starts with" search are also excluded from the "Title contains" search, but some of them can be found with the "Unique name contains" search (noting that only 5 of these have a unique name). Why would title searches exclude these portlets? I also see similar behaviour in other areas of the Portal administration. E.g. going to Portal Settings - Custom Unique Names - Pages, and performing title searches excludes some results, depending on the search terms entered.


  • two-part dice pool mechanic

    - by bythenumbers
    I'm working on a dice mechanic/resolution system based on the Ghost/Echo (hereafter shortened to G/E) tabletop RPG. Specifically, since G/E can be a little harsh in dealing out consequences and failure, I was hoping to soften the system and add a little more player control, as well as offer players the chance to evolve their characters into something unique, right from creation. So, here's the mechanic: players roll 2d12 against the two statistics for their character (each is a number from 2-11, and may be rolled above or below depending on the nature of the action attempted; rolling your stat exactly always fails). Depending on the successes on that roll, they add dice to the pool rolled for a modified G/E-style action. The acting player gets two dice anyhow, and I am debating offering a bonus die for each success, or a single bonus die for succeeding on both of the statistic-compared rolls. Once the size of the dice pool is set, the entire pool is rolled, and the players are allowed to assign rolled dice to a goal and a danger. Assigned results are judged as follows: 1-4 means the attempted goal fails, or the danger comes true; 5-8 is a partial success at the goal, or partially avoiding the danger; 9-12 means the goal is achieved, or the danger avoided. My concerns are twofold: first, that the two-stage action is too complicated, with two rolls to judge separately before anything can happen; second, that the statistics involved go too far in softening the game. I've run some basic simulations, and the approximate statistics follow:

                   2 dice    (up to) 3 dice    (up to) 4 dice
        failure    ~33%      ~25%              ~20%
        partial    ~33%      ~35%              ~35%
        success    ~33%      ~40%              ~45%

    I'd appreciate any advice that addresses my concerns or offers to refine my simulation (right now the first roll is statistically modeled as sign(1d12-1d12), where 0 is a success).
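
    For anyone who wants to poke at the numbers, here is a minimal Monte Carlo sketch in Python of one way to model the mechanic. Two details are assumptions rather than rules from the post: the stat check is approximated as sign(1d12 - 1d12) with ties counting as successes (matching the simulation note above), and the player always assigns the single highest die in the pool to the goal:

        import random

        def d12():
            return random.randint(1, 12)

        def simulate(trials=100_000, bonus_per_success=True):
            tally = {"failure": 0, "partial": 0, "success": 0}
            for _ in range(trials):
                # Stage 1: two stat checks, approximated as opposed d12 rolls;
                # ties count as successes, per the sign(1d12-1d12) model.
                successes = sum(d12() >= d12() for _ in range(2))
                # Stage 2: base pool of 2, plus bonus dice from stage 1.
                if bonus_per_success:
                    pool = 2 + successes           # one bonus die per success
                else:
                    pool = 2 + (successes == 2)    # one die for a double success
                # Assume the highest die in the pool goes to the goal.
                goal_die = max(d12() for _ in range(pool))
                if goal_die <= 4:
                    tally["failure"] += 1
                elif goal_die <= 8:
                    tally["partial"] += 1
                else:
                    tally["success"] += 1
            return {k: round(v / trials, 3) for k, v in tally.items()}

        print(simulate())                           # bonus die per success
        print(simulate(bonus_per_success=False))    # single-bonus-die variant

    Swapping in a different die-assignment strategy (e.g. reserving one die for the danger) only touches the goal_die line.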


  • Latest update to Ubuntu 13.10 broke Intel graphics drivers

    - by James Davies
    I'm running a copy of Ubuntu 13.10 on an i7-4771 with Intel HD 4600 graphics, using a Dell UltraSharp 1440p monitor via DisplayPort. Up until today this configuration had been working perfectly, but the latest update appears to have broken my graphics configuration, and Xorg now refuses to go above 1920x1200. Running xrandr, it appears the driver incorrectly thinks my monitor is plugged into the HDMI port and detects a max resolution of 1920x1200 instead of 2560x1440. (It's actually plugged in via DisplayPort.) Based on the apt history.log, the latest update was to the kernel; I'm presuming the issue is that the official Intel driver hasn't been updated to support this version. Is there any way to resolve this, or will I need to upgrade to 14.10 to get the latest driver from Intel?

        Start-Date: 2014-05-28 11:30:57
        Commandline: aptdaemon role='role-commit-packages' sender=':1.473'
        Install: linux-image-extra-3.11.0-22-generic:amd64 (3.11.0-22.38),
                 linux-image-3.11.0-22-generic:amd64 (3.11.0-22.38),
                 linux-headers-3.11.0-22:amd64 (3.11.0-22.38),
                 linux-headers-3.11.0-22-generic:amd64 (3.11.0-22.38)
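
    Until the driver catches up, one stopgap that sometimes works is forcing the mode with xrandr. A sketch (the output name DP1 is a guess - check what xrandr -q actually reports; the modeline numbers are what cvt prints for 2560x1440 at 60 Hz):

        cvt 2560 1440 60
        xrandr --newmode "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync
        xrandr --addmode DP1 "2560x1440_60.00"
        xrandr --output DP1 --mode "2560x1440_60.00"

    If the driver is genuinely treating the DisplayPort link as HDMI, this may still fail at the --mode step, which would at least confirm the kernel/driver regression.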

