Search Results

Search found 15914 results on 637 pages for 'physical security'.

Page 298/637 | < Previous Page | 294 295 296 297 298 299 300 301 302 303 304 305  | Next Page >

  • [Japanese-language article; title lost to encoding damage]

    - by [author name lost to encoding damage]
    The body of this Japanese-language article was lost to character-encoding damage. The fragments that survive indicate it covered Oracle Database Security and identity management: ID management (Oracle Identity Management), Database Firewall, and Oracle Direct seminar offerings.

    Read the article

  • [Japanese-language webcast title lost to encoding damage] ~ Oracle Database ~

    - by Yusuke.Yamamoto
    Date: 2011/08/24. The descriptive text of this Japanese-language Oracle Database webcast was lost to character-encoding damage; the surviving fragments and the linked materials indicate it covered Oracle Advanced Security and database encryption. Recording and slides: http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/Ango_08240930.wmv http://www.oracle.com/technetwork/jp/ondemand/db-basic/20100824-encryption-251749-ja.pdf

    Read the article

  • Removing and adding persistent stores to a core data application

    - by mkko
    I'm using Core Data in an iPhone application. I have multiple persistent stores and I switch from one to another so that only one of the stores is active at a time. I have one managed object context, and the different persistent stores use the same data format (SQLite) and share the same managed object model. I'm importing the data into each persistent store from a respective XML file. The first import works fine, but after I remove the imported data (the persistent store and the physical file) and then re-import, Core Data gives me an error: *** Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'The NSManagedObject with ID:0x3c14e00 <x-coredata://6D14F11E-2EA7-4141-9BE8-53747DE6FCC6/Book/p2> has been invalidated.' This error comes from the save: of NSManagedObjectContext. Before re-importing, I remove the persistent store from the persistent store coordinator and delete the physical file, so everything should be as if the import were being done for the first time. Also, the objects in the managed object context are removed, and the context is sent the reset: message (I don't know if this is actually needed). Could someone help me out here? How should the persistent store be switched? I'm basically using the same logic as tutored here: http://blog.sallarp.com/iphone-core-data-uitableview-drill-down/ Thanks in advance.

    Read the article

  • Segment register, IP register and memory addressing issue!

    - by Zia ur Rahman
    In the following text I ask two questions, and I also describe what I already know about them so that you can understand my thinking. Your comments on the text below are welcome. Detail of the first question: As we know, if we have one megabyte of memory then we need 20 bits to address it, and each memory cell in a 1 MB memory has a 20-bit physical address. The IP register in the iAPX88 is 16 bits. My point of view is that we cannot access the memory through the IP register at all, because the memory needs a 20-bit address but the IP register is only 16 bits. If we have a memory of 64 KB then the IP register can access it, because that memory needs only 16 bits to be addressed, but in the case of 1 MB memory it can't. Tell me whether I am right, and if not, why? Suppose a memory location has the physical address 11000000000000000101 (binary): how can we access this location with 16 bits? Detail of the next question: Suppose the IP register is pointing to a memory location, the segment register is also pointing to a memory location (the start of the segment), and the memory is 1 MB. How can we access a memory location through these two 16-bit registers? Tell me the sequence of steps by which the 20-bit memory address is accessed. If your answer is that we take the segment value, shift it left by 4 bits and then add the IP value to it to get the 20-bit address, then this raises another question about the address bus (which should be 20 bits wide): the segment register and the IP register are 16 bits each, so does that mean the 20-bit address bus is connected to both of these registers? If that's not the case, then another thing that comes to my mind is that both registers together generate a 20-bit address, and there would be a 20-bit register connected to both of them and to the address bus.
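
    For reference on the second question, the 8086's answer is that the 20-bit address is formed inside the CPU: the bus interface unit shifts the segment value left by 4 bits and adds the offset in a dedicated adder, and only the 20-bit sum is driven onto the address bus. Neither 16-bit register connects to the bus directly, and no programmer-visible 20-bit register holds the result. A minimal sketch of the arithmetic (the segment:offset values are illustrative):

    using System;

    class SegmentOffsetDemo
    {
        // 8086 real-mode address formation: physical = (segment << 4) + offset,
        // truncated to 20 bits just as the 20-bit address bus would.
        static uint PhysicalAddress(ushort segment, ushort offset)
        {
            return (((uint)segment << 4) + offset) & 0xFFFFF;
        }

        static void Main()
        {
            // The 20-bit binary address 1100 0000 0000 0000 0101 is 0xC0005.
            Console.WriteLine("{0:X5}", PhysicalAddress(0xC000, 0x0005)); // prints C0005
            // Segment:offset pairs are not unique; another pair reaches the same cell:
            Console.WriteLine("{0:X5}", PhysicalAddress(0xBFFF, 0x0015)); // prints C0005
        }
    }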

    Read the article

  • Object oriented design of game in Java: How to handle a party of NPCs?

    - by Arvanem
    Hi folks, I'm making a very simple 2D RPG in Java. My goal is to do this in as simple code as possible. Stripped down to basics, my class structure at the moment is like this: Physical objects have an x and y dimension. Roaming objects are physical objects that can move(). Humanoid objects are roaming objects that have inventories of GameItems. The Player is a singleton humanoid object that can hire up to 4 NPC Humanoids to join his or her party, and do other actions, such as fight non-humanoid objects. NPC Humanoids can be hired by the Player object to join his or her party, and once hired can fight for the Player. So far I have given the Player class a "party" ArrayList of NPC Humanoids, and the NPC Humanoids class a "hired" Boolean. However, my fight method is clunky, using an if to check the party size before implementing combat, e.g. public class Player extends Humanoids { private ArrayList<Humanoids> party; // GETTERS AND SETTERS for party here //... public void fightEnemy(Enemy eneObj) { if (this.getParty().size() == 0) // Do combat without party issues else if (this.getParty().size() == 1) // Do combat with party of 1 else if (this.getParty().size() == 2) // Do combat with party of 2 // etc. My question is, thinking in object oriented design, am I on the right track to do this in as simple code as possible? Is there a better way?
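
    One way past the size-branching is to let combat iterate over however many party members exist, so party size never appears in the control flow. A rough sketch of that shape (written in C# here, but it maps one-to-one onto the Java classes above; all names are illustrative):

    using System.Collections.Generic;

    class Enemy
    {
        public int Health = 20;
    }

    class Humanoid
    {
        public virtual int AttackPower { get { return 5; } }

        public void Fight(Enemy enemy)
        {
            enemy.Health -= AttackPower;
        }
    }

    class Player : Humanoid
    {
        private readonly List<Humanoid> party = new List<Humanoid>();

        public void Hire(Humanoid npc)
        {
            if (party.Count < 4) party.Add(npc); // cap of 4 hirelings, as in the post
        }

        // No branching on party size: the player strikes, then each hired
        // member contributes through the same loop, whether there are 0 or 4.
        public void FightEnemy(Enemy enemy)
        {
            Fight(enemy);
            foreach (Humanoid member in party)
                member.Fight(enemy);
        }
    }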

    Read the article

  • In WMI, can I use a join (or something similar) to acquire the IisWebServer object for a site, given a server name and a physical path?

    - by Precipitous
    Given a server name and a physical path, I'd like to be able to hunt down the IISWebServer object and ApplicationPool. A website URL is also an acceptable input. Our technologies are IIS 6, WMI, and access via C# or PowerShell 2. I'm certain this would be easier with IIS 7 and its managed API, but we don't have that yet. Here's what I can do: get a list of IIS virtual directories from IISWebVirtualDirSetting and filter (offline) for the matching physical path. $theVirtualDir = gwmi -Namespace "root/MicrosoftIISv2" ` -ComputerName $servername -authentication PacketPrivacy ` -class "IISWebVirtualDirSetting" ` | where-object {$_.Path -like $deployLocation} From the virtual directory object, I can get a name (like W3SVC/40565456/root). Given this name, I can get to other goodies, such as the IIS web server object. gwmi -Namespace "root/MicrosoftIISv2" ` -ComputerName $servername ` -authentication PacketPrivacy ` -Query "SELECT * FROM IisWebServer WHERE Name='W3SVC/40589473'" The questions, restated: 1) WQL is a query language - can I join or subquery so that one WMI query statement gets web servers based on IISWebVirtualDir.Path? How? 2) In solving 1, you'll have to explain how to query on the Path property. Why is this an invalid query? "SELECT * FROM IISWebVirtualDirSetting WHERE Path='D:\sites\globaldominator'"
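
    On the restated questions: WQL has no general JOIN or subquery (ASSOCIATORS OF exists, but only for classes linked by an association class), so a two-step lookup is the usual route; and the Path query fails because WQL treats the backslash as an escape character, so literal backslashes must be doubled. A hedged C# sketch of both points together (the server name is illustrative):

    using System;
    using System.Management; // add a reference to System.Management.dll

    class FindIisServerForPath
    {
        static void Main()
        {
            ManagementScope scope = new ManagementScope(@"\\MYSERVER\root\MicrosoftIISv2");
            scope.Options.Authentication = AuthenticationLevel.PacketPrivacy;
            scope.Connect();

            // In WQL the backslash is an escape character, so literal
            // backslashes in the Path comparison must be doubled:
            ObjectQuery vdirQuery = new ObjectQuery(
                @"SELECT Name FROM IIsWebVirtualDirSetting WHERE Path = 'D:\\sites\\globaldominator'");

            foreach (ManagementObject vdir in new ManagementObjectSearcher(scope, vdirQuery).Get())
            {
                // Name looks like 'W3SVC/40589473/root...'; its first two
                // segments identify the owning web server.
                string[] parts = ((string)vdir["Name"]).Split('/');
                string serverName = parts[0] + "/" + parts[1];

                ObjectQuery serverQuery = new ObjectQuery(
                    "SELECT * FROM IIsWebServer WHERE Name = '" + serverName + "'");
                foreach (ManagementObject server in new ManagementObjectSearcher(scope, serverQuery).Get())
                    Console.WriteLine("{0} -> {1}", vdir["Name"], server["Name"]);
            }
        }
    }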

    Read the article

  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks, We have a Linux system (Kubuntu 7.10) that runs a number of CORBA server processes. The server software uses the glibc library for memory allocation. The Linux PC has 4 GB of physical memory; swap is disabled for speed reasons. Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2 GB, and can be up to about 1.9 GB. When the request has completed, the buffer is released using 'delete'. This works fine for several consecutive requests that allocate buffers of the same size, or when a request allocates a smaller size than the previous one. The memory appears to be freed OK - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard. The problem arises when a request requires a buffer larger than the previous one. In this case, operator 'new' throws an exception. It's as if the memory that was freed from the first allocation cannot be re-allocated, even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds; i.e. killing the process appears to fully release the freed memory back to the system. Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is released to the system. BTW - I'm not sure if it's relevant to our problem, but the server uses pthreads that get created and destroyed on each processing request. Cheers, Brian.

    Read the article

  • Segmentation in Linux : Segmentation & Paging are redundant?

    - by claws
    Hello, I'm reading "Understanding Linux Kernel". This is the snippet that explains how Linux uses Segmentation which I didn't understand. Segmentation has been included in 80 x 86 microprocessors to encourage programmers to split their applications into logically related entities, such as subroutines or global and local data areas. However, Linux uses segmentation in a very limited way. In fact, segmentation and paging are somewhat redundant, because both can be used to separate the physical address spaces of processes: segmentation can assign a different linear address space to each process, while paging can map the same linear address space into different physical address spaces. Linux prefers paging to segmentation for the following reasons: Memory management is simpler when all processes use the same segment register values that is, when they share the same set of linear addresses. One of the design objectives of Linux is portability to a wide range of architectures; RISC architectures in particular have limited support for segmentation. All Linux processes running in User Mode use the same pair of segments to address instructions and data. These segments are called user code segment and user data segment , respectively. Similarly, all Linux processes running in Kernel Mode use the same pair of segments to address instructions and data: they are called kernel code segment and kernel data segment , respectively. Table 2-3 shows the values of the Segment Descriptor fields for these four crucial segments. I'm unable to understand 1st and last paragraph.

    Read the article

  • Windows Azure - access webrole local storage from separate workerrole

    - by Brett Smith
    I'm running an application on Windows Azure. The MVC views need to be dynamic; I started by storing them as records in the database, but am quite keen to move them to a physical location. My concept was to create the physical file via code, which worked great and sped up the page load dramatically. This was, of course, before I realised that the files were only available for the duration of the role. Next I looked at a start-up task to create the files when the role was started - however, I then realised that any separate instances weren't going to sync up unless I monitored the database for changes. So I moved from a start-up task to a function in the Run method of the role that checks the database every 10 minutes to see whether changes have occurred. The problem is that this seems to choke up the application (at least in the warm-up stage). Ideally I would like to move the Run function to its own worker role that can sit there and push files out to web role instances, but I'm unsure how I would go about accessing a web role's local storage from the worker role. Can anybody tell me whether this is actually possible, and hopefully point me in the right direction to achieve it? Just to clarify what I'm trying to achieve: a view is created in the user interface running on one web role and stored in the database; a separate (front-end) web role has a client-side application with a virtual path provider pointing view requests to local storage (LocalResource); and a separate worker role creates the view structure and loads it into the front-end web role's local storage.
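
    For what it's worth, local storage (LocalResource) is per-instance disk: no other role, and no other instance of the same role, can reach it directly, so a worker role cannot push files into a web role's local store. The usual pattern is a shared intermediary (such as a blob container) that the worker publishes to and each web role instance copies down from. A minimal sketch of the worker-side loop described above (ViewSync is a hypothetical stand-in for the database-check-and-publish logic):

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime; // Azure SDK assembly

    public class ViewPublisherRole : RoleEntryPoint
    {
        public override void Run()
        {
            while (true)
            {
                try
                {
                    // Hypothetical helper: checks the database for changed views and
                    // publishes them to shared storage (e.g. a blob container) that
                    // each web role instance polls and copies into its own local storage.
                    ViewSync.PublishChangedViews();
                }
                catch (Exception ex)
                {
                    System.Diagnostics.Trace.TraceError(ex.ToString()); // don't let one failure kill the role
                }
                Thread.Sleep(TimeSpan.FromMinutes(10));
            }
        }
    }

    // Hypothetical stand-in for the database-check-and-publish logic described above.
    static class ViewSync
    {
        public static void PublishChangedViews() { }
    }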

    Read the article

  • Bulletin board - Database optimisation

    - by andrew
    This question is a follow-on from this question. The project and problem: The project I am currently working on is a bulletin board for a large non-profit organisation. The bulletin board will be used to allow inter-office communication within the organisation. I am building the application and have been having trouble extracting the results that I need from my database, because I don't think it is properly normalized and because of limitations in my knowledge of relational database theory and MySQL. I would appreciate input into the design of the board in general and, in particular, ways that the database structure can be improved to facilitate efficient queries and help me develop this application (and future applications) faster. Business logic - the bulletin board will be used in the following ways. Posting bulletins and responses to bulletins: employees, or 'users', in offices around the country will be able to post messages to the bulletin board. Bulletins must be posted to a location and categorised - I'll call these 'bulletins'. Users will be able to post any number of replies to any one bulletin, and users will be able to reply to their own bulletins - I'll call these 'replies'. Rating bulletins and replies: users will be able to either 'like' or 'dislike' a bulletin or a reply, and the total number of likes or dislikes will be shown for each bulletin or reply. Viewing the bulletin board and responses: bulletins can be displayed chronologically; users can sort bulletins chronologically or chronologically by the latest reply to each bulletin (let me know if you need more explanation). When a particular bulletin is selected, replies to that bulletin will be displayed chronologically. A sketch of this structure follows below. @PerformanceDBA - edited 10:34 EST 28/12/10: I have begun implementing the data model. I assume that the 6th data model is the physical model, because it contains the associative tables. I am going to post any questions that I have below. I will put up a database dump once I am done. I will then put up a list of all the queries that I need to run on the database and begin writing them. I hope you had a good Christmas. I'm in Canada and there's snow! Implementation of physical model
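
    As a reading aid, the business rules above reduce to roughly five relations. A hedged sketch of that shape, written here as bare C# classes since the post doesn't fix a schema (all names are illustrative):

    using System;

    class User { public int UserId; public int OfficeId; public string Name; }

    class Bulletin
    {
        public int BulletinId;
        public int AuthorId;   // -> User
        public int LocationId; // bulletins are posted to a location...
        public int CategoryId; // ...and categorised
        public DateTime PostedAt;
        public string Body;
    }

    class Reply
    {
        public int ReplyId;
        public int BulletinId; // -> Bulletin (users may reply to their own)
        public int AuthorId;   // -> User
        public DateTime PostedAt;
        public string Body;
    }

    // One row per vote; like/dislike totals are aggregates over these tables,
    // and a unique key on (UserId, BulletinId) or (UserId, ReplyId) enforces
    // one vote per user per item.
    class BulletinVote { public int UserId; public int BulletinId; public bool IsLike; }
    class ReplyVote { public int UserId; public int ReplyId; public bool IsLike; }

    Sorting bulletins by latest reply then falls out of an aggregate (MAX of Reply.PostedAt grouped by BulletinId) rather than extra denormalized columns.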

    Read the article

  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the night on our client's server, and then has a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundred MB, is moved to the swap file. This happens at night, when our service is not used and others are scheduled to run (DB backups, AV scans, etc). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds). I'm quite certain it's an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in physical memory. I know doing that will hurt other processes on the server and decrease the overall server throughput. That said, our clients just want our app to be responsive; they don't care if nightly jobs take longer. I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that will initiate higher-level functionality (there is already an internal scheduler that does very little and makes no difference). If there were a 3rd-party tool that provided that kind of service, it would be just as good. I'd love to hear any comments, recommendations and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers.
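
    For comparison: the hard-pinning route on Windows is SetProcessWorkingSetSize/VirtualLock, which is exactly the "force it to stay physical" approach disliked above. The softer external watchdog can be as small as a scheduled process that touches the service during idle hours, so its hot pages are faulted back in before the first real user arrives. A minimal sketch (the ping URL is a hypothetical lightweight endpoint the service would expose):

    using System;
    using System.Net;
    using System.Threading;

    class ServiceWarmer
    {
        static void Main()
        {
            while (true)
            {
                try
                {
                    // Hypothetical lightweight endpoint on the service; any cheap
                    // call that exercises its normal code path will page it back in.
                    using (WebClient client = new WebClient())
                        client.DownloadString("http://localhost:8080/ping");
                }
                catch (WebException ex)
                {
                    Console.Error.WriteLine("warm-up ping failed: " + ex.Message);
                }
                Thread.Sleep(TimeSpan.FromMinutes(15)); // frequent enough to stay resident overnight
            }
        }
    }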

    Read the article

  • What is the correct high level schema.org microdata itemtype for a retail brand/company homepage?

    - by kpowz
    I'd like to hear which schema.org itemtype others would recommend (or have used) for the microdata on a retail brand's company homepage. Take for example TOMS Shoes. Example #1 - using /Corporation as the high-level itemtype, one can include a lot of great /Organization microdata, but nothing about the retail store: <html itemscope='itemscope' itemtype='http://schema.org/WebSite'> <head></head> <body itemscope='itemscope' itemtype='http://schema.org/Corporation'> various microdata here, probably including Product microdata </body> </html> NOTE: the only schema.org property specific to /Corporation is tickerSymbol, and TOMS doesn't have one. Example #2 - this code would work if TOMS started its own chain of physical retail stores and each location had its own homepage. For TOMS.com, however, although schematically accurate and more descriptive on its face, this is incorrect microdata markup, because /ShoeStore derives from /LocalBusiness, which must represent a physical place: <html itemscope='itemscope' itemtype='http://schema.org/WebSite'> <head></head> <body itemscope='itemscope' itemtype='http://schema.org/ShoeStore'> a whole bunch of jabber here </body> </html> NOTE: since TOMS is virtual and thus can't be a /Store, you lose really useful properties like 'currenciesAccepted', 'paymentAccepted' and 'priceRange'. Is this just a 'sit and wait' situation until more schemas are approved for 'virtual places', or is there a validation-passing way to get the best of both worlds?

    Read the article

  • Need some clarification on the ANSI/SPARC 3-tier database architecture.

    - by Moonshield
    Hi there, I'm currently revising for a databases exam and looking over some past papers, but there's one question that I'm slightly unsure about and was wondering if someone could offer some assistance. "Describe EACH of the THREE levels of the ANSI SPARC 3 level architecture. Your answer should include the purpose of EACH of the schemas, the level of abstraction they provide and the software tools that would be used to access and support them." As I understand it (although please correct me if I'm wrong): the internal schema specifies the physical storage of the data; the conceptual schema specifies the structure of the database and the domains; and the external schemas are how the database is viewed by "users" (applications, etc.). As for the abstraction, I understand that the conceptual layer means the physical data storage can be altered without the end user being affected, and likewise that the conceptual schema can be altered without the external views being affected. The bit that I'm not sure about is what tools are used to access and support each layer. Would the internal schema be handled by the DBMS, the conceptual schema by some sort of DDL interpreter, and the external schemas by a DML interpreter (or have I misunderstood what each level does)? Any assistance would be greatly appreciated. Thanks, Moonshield

    Read the article

  • ASP.Net Session Storage provider in 3-layer architecture

    - by Tedd Hansen
    I'm implementing a custom session storage provider in ASP.Net. We have a strict 3-layer architecture (presentation - business - database), and therefore session storage needs to go through the business layer. The business layer is accessed through WCF, and the database is MSSQL. What I need is (in order of preference): a commercial/free/open-source product that solves this; or the source code of a SqlSessionStateStore-style custom session store (not the ODBC sample on MSDN) that I can modify to use a middle layer. I've tried looking at the .Net source through Reflector, but the code is not usable. Note: I understand how to do this; I am looking for working samples, preferably ones that have been proven to work fine under heavy load. The ODBC sample on MSDN doesn't use the (new?) stored procs that the built-in SqlSessionStateStore uses - I'd like to use these if possible (it decreases traffic). Edit 1 - to answer Simon's request for more info: the ASP.Net Session object can be stored in-process, in the ASP.Net State Service, or in SQL Server. In a secure 3-layer model the presentation layer (web server) does not have direct/physical access to the database layer (SQL Server), and even without the physical limitations, from an architectural standpoint you may not want it to. InProc and the ASP.Net State Service do not support load balancing and don't have fault tolerance. Therefore the only option is to access SQL through a web-service middle layer (the business layer).
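
    For reference, a custom store plugs in by deriving from SessionStateStoreProviderBase and being registered in web.config's <sessionState> element. The skeleton below shows the shape, with the middle-tier calls left as hypothetical stubs (BusinessLayerClient stands in for the WCF proxy) and the exclusive-locking contract deliberately glossed over; a production store must implement it properly:

    using System;
    using System.IO;
    using System.Web;
    using System.Web.SessionState;

    public class MiddleTierSessionStore : SessionStateStoreProviderBase
    {
        public override SessionStateStoreData GetItem(HttpContext context, string id,
            out bool locked, out TimeSpan lockAge, out object lockId, out SessionStateActions actions)
        {
            locked = false; lockAge = TimeSpan.Zero; lockId = null; actions = SessionStateActions.None;
            byte[] blob = BusinessLayerClient.GetSession(id); // hypothetical middle-tier call
            return blob == null ? null : Deserialize(context, blob);
        }

        public override SessionStateStoreData GetItemExclusive(HttpContext context, string id,
            out bool locked, out TimeSpan lockAge, out object lockId, out SessionStateActions actions)
        {
            return GetItem(context, id, out locked, out lockAge, out lockId, out actions); // no real lock!
        }

        public override void SetAndReleaseItemExclusive(HttpContext context, string id,
            SessionStateStoreData item, object lockId, bool newItem)
        {
            BusinessLayerClient.SaveSession(id, Serialize(item), item.Timeout); // hypothetical
        }

        public override void ReleaseItemExclusive(HttpContext context, string id, object lockId) { }

        public override void RemoveItem(HttpContext context, string id, object lockId, SessionStateStoreData item)
        {
            BusinessLayerClient.DeleteSession(id); // hypothetical
        }

        public override void ResetItemTimeout(HttpContext context, string id) { }
        public override void CreateUninitializedItem(HttpContext context, string id, int timeout) { }
        public override bool SetItemExpireCallback(SessionStateItemExpireCallback expireCallback) { return false; }
        public override void InitializeRequest(HttpContext context) { }
        public override void EndRequest(HttpContext context) { }
        public override void Dispose() { }

        public override SessionStateStoreData CreateNewStoreData(HttpContext context, int timeout)
        {
            return new SessionStateStoreData(new SessionStateItemCollection(),
                SessionStateUtility.GetSessionStaticObjects(context), timeout);
        }

        private static byte[] Serialize(SessionStateStoreData item)
        {
            using (MemoryStream ms = new MemoryStream())
            using (BinaryWriter writer = new BinaryWriter(ms))
            {
                ((SessionStateItemCollection)item.Items).Serialize(writer);
                return ms.ToArray();
            }
        }

        private static SessionStateStoreData Deserialize(HttpContext context, byte[] blob)
        {
            using (BinaryReader reader = new BinaryReader(new MemoryStream(blob)))
                return new SessionStateStoreData(SessionStateItemCollection.Deserialize(reader),
                    SessionStateUtility.GetSessionStaticObjects(context), 20); // timeout handling omitted
        }
    }

    // Hypothetical middle-tier proxy; in this architecture it would be a WCF client.
    static class BusinessLayerClient
    {
        public static byte[] GetSession(string id) { return null; }
        public static void SaveSession(string id, byte[] blob, int timeout) { }
        public static void DeleteSession(string id) { }
    }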

    Read the article

  • Link failure with either abnormal memory consumption or LNK1106 in Visual Studio 2005.

    - by Corvin
    Hello, I am trying to build a solution for Windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, exes, DLLs) and is being used successfully by our partners. I copied the solution bundle from their repository and tried setting it up on three similar machines belonging to people in our group. I was successful on two machines, and the solution failed to build on mine. The build on my machine encountered two problems: 1) During a simple build, creation of the biggest static library (about 522 MB in debug mode) fails with the message "13libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879". 2) A full solution rebuild creates this library; however, when it comes to linking the library into the main .exe file, devenv.exe spawns link.exe, which consumes about 80 MB of physical memory and 250 MB of virtual memory and spawns another link.exe, which does the same. This goes on until the system runs out of memory. On the PCs of my colleagues where a successful build could be performed, there is only one link.exe process, which uses all the memory required for linking (about 500 MB physical). There is plenty of hard drive space on my machine and the file system is NTFS. All three of our systems are similar - Core2Quad processors, 4 GB of RAM, Windows XP SP3 - and we are using Visual Studio installed from the same source. I tried using different RAM and a different CPU, using a dedicated graphics adapter to eliminate the possibility of video memory sharing influencing the build, putting the solution files in a different location, using different versions of VS 2005 (Professional, Standard and Team Suite), changing the amount of available virtual memory, running memtest86, and building the project from scratch (i.e. a clean bundle). I have read what MSDN says about LNK1106; none of the cases apply to me except maybe "out of heap space", but I am not sure how I should fight this. The only idea I have left is reinstalling the OS, but I am not sure it would help, and I am not sure my situation wouldn't repeat itself on a different machine. Would anyone have any sort of advice for me? Thanks

    Read the article

  • What is the simplest way to map a folder on the file system to a url in Tomcat?

    - by Simon
    Here's my problem: I have a small prototype app (it happens to be in Grails, hosted on AWS) and I want to add the ability for users to upload a few (max 10) images. I want to persist these images on disk on the server machine, in a folder location outside my WAR. I realise that there is probably a super-scalable solution involving more web servers and optimised static asset serving, but for the approximately 100 users I am likely to get, it's really not worth the effort and cost. So, what is the simplest way I can have a virtual folder in my URL space map to a physical folder on disk? I sort of want http://myapp.com/static to map to a folder that I can configure, e.g. /var/www/static, so I can then have in my code: <img src="/static/user1/picture.jpg"/> I don't particularly mind whether the resulting physical folders are directly browsable. Security will eventually be an issue, but it isn't at the start. So, what are my options? I have looked at virtual hosts on the Apache site, but it feels more complicated than I need, and I don't want to use the Grails static rendering plugins.

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory-intensive and deals with reading a large number of disk files; the total load can be more than 3 GB. A custom memory manager uses memory-mapped files to achieve reading of such huge data: files are mapped into the process memory space only when needed, and with this the process memory stays well under control. What we observe, though, is that with memory mapping the system cache keeps growing until it occupies all available physical memory, which slows down the entire system. My question is: how do I prevent the system cache from hogging physical memory? I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but then the read operations take a considerable amount of time and application performance suffers. How can I achieve scalability without sacrificing much performance? What are the common techniques used in such cases? I don't have a good understanding of the WinXP OS caching behavior; any good links explaining it would also be helpful.
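
    As an aside for anyone reproducing the map-on-demand pattern in managed code (the post itself is Win32/C++, so this is an analogue rather than the code in question): the key move is to create a view only for the window currently being processed and dispose of it promptly. A sketch with illustrative file name and sizes:

    using System;
    using System.IO;
    using System.IO.MemoryMappedFiles; // .NET 4+

    class WindowedReader
    {
        static void Main()
        {
            string path = @"C:\data\big.bin";          // illustrative file
            const long windowSize = 64 * 1024 * 1024;  // 64 MB view window, illustrative

            long fileLength = new FileInfo(path).Length;
            using (MemoryMappedFile mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open))
            {
                for (long offset = 0; offset < fileLength; offset += windowSize)
                {
                    long size = Math.Min(windowSize, fileLength - offset);
                    using (MemoryMappedViewAccessor view = mmf.CreateViewAccessor(offset, size))
                    {
                        byte first = view.ReadByte(0); // process the window here
                        Console.WriteLine("window at {0}: first byte {1}", offset, first);
                    }
                    // The view is disposed here, releasing the mapping promptly so
                    // only one window's worth of pages is held at a time.
                }
            }
        }
    }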

    Read the article

  • Cannot open database requested by the login. The login failed. Login failed for user

    - by Cipher
    Hi, I have copied a DB from one of my computers and am using it here. On trying to open a page that fetches content from the DB, con.Open() gives me this exception: Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\DATA\cakephp.mdf". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)". Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\DATA\cakephp_log.LDF". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)". Cannot open database "cakephp" requested by the login. The login failed. Login failed for user 'Sarin-PC\Sarin'. I have attached the database from Management Studio Express 2008 and I have also checked the connection string. Here it is: <connectionStrings> <add name="cn" connectionString="server=.\sqlexpress;database=cakephp;integrated security=true;uid=sarin;pwd=******"/> </connectionStrings> In Visual Studio, when I test the connection, it says "Test connection succeeded". However, there is one strange thing going on: when I log in to Management Studio, there is no + sign next to the newly attached database, as shown. If the full web.config is required, I have pasted it here: http://pastebin.com/sVAuN0Ug

    Read the article

  • What are proven, scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] have to be fast to query if they are to be used in customizing web sites, consumer communications, etc. Well, if you have a large number of consumers and large profiles with a huge set of variables (as profiles describing human behaviour are likely to be), you are in trouble. If you really have a physical relational database to which you target a query, and then a physical disk starts to rotate someplace to give you an individual profile or a set of profiles, the profile user (a web site customizing a page, a recommendation engine making a recommendation) has died of boredom before getting any observable results. There is the possibility of holding the profiles in memory, which would of course increase performance hugely. What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these someplace?

    Read the article

  • SSTP client disconnects shortly after successfully connected to VPN

    - by Eran Betzalel
    I'm successfully authenticating and connecting to an SSTP VPN (on Windows Server 2008) from my Windows 7 machine, but for some reason the connection is dropped about 1-2 seconds after it's established. I've done the following: defined an SSTP VPN on my Windows Server 2008 box; defined the same machine as a CA; issued the needed certificates and installed them on the client. I'm currently testing this VPN inside my LAN, so all the needed ports are open. Here are the event log entries when trying to connect: Error log (client): The user HOME\User dialed a connection named Home VPN which has terminated. The reason code returned on termination is 829. Error log (server VPN): The user HOME\User connected on port VPN0-0 on 7/27/2012 at 1:57 AM and disconnected on 7/27/2012 at 1:57 AM. The user was active for 0 minutes 0 seconds. 312 bytes were sent and 4528 bytes were received. The reason for disconnecting was user request. What would be the issue? How can I resolve or debug it? UPDATE: I've found an event log message (Log=System, Source=RasSstp) on the Windows 7 machine that tries to connect to the VPN: The SSTP-based VPN connection to the remote access server was terminated because of a security check failure. Security settings on the remote access server do not match settings on this computer. Contact the system administrator of the remote access server and relay the following information: SHA1 Certificate Hash: 065D681...520375552F SHA256 Certificate Hash: 18DED363...EEEE28CFD00

    Read the article

  • QNTC and Windows Server 2008 R2

    - by Ben
    I am having a really hard time getting an iSeries (AS/400) machine talking to my new Windows Server 2008 R2 box using the QNTC file system on the iSeries. I had similar problems getting it to talk to a Windows Server 2003 machine initially, but enabling the local Guest account on the 2003 box solved that one. No such luck with the new 2008 box. When I do WRKLNK /QNTC/SVR01 on the iSeries (which should show share listings, and does for any 2003 box), all I get is "(Cannot find object to match specified name.)". I know the iSeries likes the same username and password on the remote server; unfortunately for us this is not the case, although it does currently work with different username/password combinations on a 2003 box. To try and get the wretched things talking, I have made the 2008 server pretty open, but the iSeries will not see shares on it. I have enabled the local Guest account, turned Windows Firewall off, and set the share permissions so Everyone has full control, all to no avail. I read something on the internet about the iSeries only being able to handle NTLM authentication (and I understand that by default Server 2008 R2 only uses NTLMv2 and has NTLM disabled), so I made a special group policy for the server and tweaked all the Group Policy settings under Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options, but the iSeries STILL won't see it. We have a team of programmers who do all the system administration of the iSeries, but they are stumped for ideas on their side, and I'm stumped for ideas on mine. This is driving me crazy now, and if anybody has managed to get an iSeries to talk to Windows Server 2008 R2 using QNTC I would be very appreciative of any suggestions, be they on the Windows side, iSeries settings or even IBM PTFs that might patch anything. The iSeries is running V5R4 and I have *SECOFR privileges on it, if that helps. One final (most important!) note - the programmers think it's my system being tricky, and I think it's theirs - please prove me right :)

    Read the article

  • yum not working on EC2 Red Hat instance: Cannot retrieve repository metadata

    - by adev3
    For some reason yum has stopped working in my Amazon EC2 instance, located in the EU West sector. There seems to be something wrong with the path of the repo metadata, is this correct? I would be very grateful for any help, as my experience in this field is somewhat limited. Thank you very much. cat /etc/redhat-release: Red Hat Enterprise Linux Server release 6.2 (Santiago) yum repolist: Loaded plugins: amazon-id, rhui-lb, security https://rhui2-cds01.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401" Trying other mirror. https://rhui2-cds02.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401" Trying other mirror. repo id repo name status rhui-eu-west-1-client-config-server-6 Red Hat Update Infrastructure 2.0 Client Configuration Server 6 0 rhui-eu-west-1-rhel-server-releases Red Hat Enterprise Linux Server 6 (RPMs) 0 rhui-eu-west-1-rhel-server-releases-optional Red Hat Enterprise Linux Server 6 Optional (RPMs) 0 repolist: 0 yum update: (I needed to remove the base URLs below because of ServerFault's restrictions for new users) Loaded plugins: amazon-id, rhui-lb, security [same as base url 1 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401" Trying other mirror. [same as base url 2 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401" Trying other mirror. Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhui-eu-west-1-client-config-server-6. Please verify its path and try again

    Read the article

  • Unable to install SQL 2008 on Windows 7

    - by Axel
    SQL 2008 install hangs on Windows 7. The story: trying to install SQL 2008 on Windows 7 hangs on SqlEngineDBStartconfigAction_install_configrc_Cpu32. What I tried: uninstall hangs on validation; manual uninstall using msiinv.exe and msiexec /x works; adding the SQL service accounts to local admins - no help; turning off UAC - no help. Last lines in the setup log: 2010-04-01 16:18:05 SQLEngine: : Checking Engine checkpoint 'GetSqlServerProcessHandle' 2010-04-01 16:18:05 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' to be created 2010-04-01 16:18:07 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' or sql process handle to be signaled 2010-04-01 16:18:07 SQLEngine: : Checking Engine checkpoint 'WaitSqlServerStartEvents' 2010-04-01 16:18:53 Slp: Sco: Attempting to initialize script 2010-04-01 16:18:53 Slp: Sco: Attempting to initialize default connection string 2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NotSpecified 2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NamedPipes 2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connection String: Data Source=\\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup 2010-04-01 16:18:53 SQLEngine: : Checking Engine checkpoint 'ServiceConfigConnect' 2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connecting to SQL.... 2010-04-01 16:18:53 Slp: Sco: Attempting to connect script 2010-04-01 16:18:53 Slp: Connection string: Data Source=\\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup And now comes the fun part: when I open the configuration manager I can see the service running. I enabled named pipes and TCP/IP and restarted the service; I'm able to connect to the server using an OLE DB connection, but not with the Native Client. What I find suspicious is the following error in my app log: .NET Runtime Optimization Service (clr_optimization_v2.0.50727_32) - Failed to compile: C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\Tools\VDT\DataProjects.dll . Error code = 0x8007000b On MS Connect this is reported as a bug, but MS is unable to reproduce the problem, although when you search the fora I'm clearly not the only one with it. So any help is appreciated.

    Read the article

  • How do you import CA certificates onto an Android phone?

    - by f50driver
    Hi all, I want to connect to my university's wireless using my Nexus One. When I go to "Add Wi-Fi network" in Wireless Settings, I fill in the network SSID, select 802.1x Enterprise for the security and fill everything out. The problem is that our university's wireless uses the Thawte Premium Server CA certificate for certification, and when I click the drop-down list for CA certificate I get nothing in the list (just N/A). I have the certificate (Thawte Premium Server CA.pem) and have moved it to my SD card, but it doesn't look like Android automatically detects it. Where should I put the certificate so that the Android wireless manager recognizes it? In other words, how can I import a CA certificate so that Android recognizes that it is on the phone and displays it in the CA certificate drop-down list? Thanks for any help, Tomek P.S. My phone is not rooted. EDIT: After doing some research, it looks like you can install certificates by going to Settings > Location & Security > Install from SD card. Unfortunately it looks like the only accepted file extension is .p12; there doesn't seem to be a way to import .cer or .pem files (which are the only two files that come with the Thawte certificates) at this moment. It does look like you can use a converter to convert your .cer or .pem files to .p12, but a key file is needed: https://www.sslshopper.com/ssl-converter.html I do not know where to get this key file for the Thawte certificates.

    Read the article

  • Event ID: 861 - The Windows Firewall has detected an application listening for incoming traffic

    - by Chris Marisic
    Firstly, my machines aren't compromised; any person suggesting such will be downvoted. The security logs on some of my network's client machines (all Windows XP SP3) get filled with these useless messages: Security Failure Audit Detailed Tracking Event ID: 861 User: NT AUTHORITY\NETWORK SERVICE The Windows Firewall has detected an application listening for incoming traffic. Name: - Path: C:\WINDOWS\system32\svchost.exe Process identifier: 976 User account: NETWORK SERVICE User domain: NT AUTHORITY Service: Yes RPC server: No IP version: IPv4 IP protocol: UDP Port number: 55035 Allowed: No User notified: No It's always on various random UDP ports, so setting up a port exception isn't really an option. It's always from svchost or lsass, both of which run services from DLLs; one of the worst offenders seems to be the DnsCache service. In my global policy, under Administrative Templates > Network > Network Connections > Windows Firewall > Domain Profile (I haven't changed any Standard Profile options - do both need to be configured?), I allow remote administration and remote desktop exceptions and have a custom program exception list containing: %SystemRoot%\system32\svchost.exe:*:enabled:svchost (Windows won't allow you to add this exception on a local machine, but it let me have it in the global policy; it just doesn't seem to do anything) %SystemRoot%\system32\lsass.exe:*enabled:lsass (I think this one ended all of my LSASS messages) %SystemRoot%\system32\dnsrslvr.dll:*:enabled:dnscache (I tried adding the DLL itself to the exception list; this didn't seem to do anything) Are there really any other options left besides disabling the Windows Firewall entirely, disabling auditing entirely, or just setting the event viewer to auto-overwrite when needed? I'd much rather fix the problem and stop these entries from ever being created instead of just covering up the problem.

    Read the article
