Search Results

Search found 25123 results on 1005 pages for 'domain model'.


  • Configuration of Server root email - Change Address and Name on outgoing email

    - by JTWOOD
    As a newbie Postfix user, I've gotten this far and am now stuck on a small problem. I would like to configure my local network servers to send alerts and the like using the following: 1) From address: [email protected] 2) From name: the hostname. I can get #1 to work fine using smtp_generic_maps. The problem is that in my email client the name is listed as "root"; the header shows the following: Date: Sun, 29 Jul 2012 13:21:01 -0400 (EDT) From: [email protected] (root) To: undisclosed-recipients:; I'd like to change it to "From: [email protected] (Zeus)". I imagine this can be done with header_checks, but so far I haven't gotten anything to work, and before I waste a ton of time on it I'd like to make sure I am on the right track. My aliasing and generic maps are set up correctly (as far as I can tell, the results are correct). I just want to change that last bit of the From field to reflect the hostname. I would also like to add something to the subject of outgoing messages for easy filtering - something like Subject: [Zeus.domain] - "Original Subject". Any suggestions are much appreciated. Thanks!
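
    A possible direction (a sketch, not tested against the poster's setup): Postfix header_checks regexp tables support a REPLACE action, so the From display name and a subject prefix can be rewritten on the way out. The address root@zeus.domain.com, the hostname "Zeus" and the file paths below are assumptions, since the real values are redacted in the question; smtp_header_checks (applied by the SMTP client) needs Postfix 2.5 or later, while plain header_checks works on older releases via cleanup.

        # /etc/postfix/main.cf
        smtp_header_checks = regexp:/etc/postfix/header_checks

        # /etc/postfix/header_checks -- rewrite the display name and tag the subject
        /^From:\s+(root@zeus\.domain\.com)\s+\(root\)$/   REPLACE From: $1 (Zeus)
        /^Subject:\s+(.*)$/                               REPLACE Subject: [Zeus.domain] - $1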

    Read the article

  • IIS6 site using integrated authentication (NTLM) fails when accessed with Win7 / IE8

    - by Ciove
    Hi, I'm having pretty similar problems to those described in case 139099, but the fix there doesn't seem to work for me. Here are the details: Server: Win2003 Srv R2 SP2 (standalone, not a member of a domain), IIS6, TCP/443 (https). Anonymous access disabled, Integrated Windows authentication enabled, local user accounts. Each user account has its own virtual folder with Change access, plus Read access to the site root. The 'adsutil NTAuthenticationProviders "NTLM"' setting is applied to the site root and to each user account's virtual folder. Client: Win7 Enterprise, member of an AD domain, IE8. It allows three login attempts and then fails, using [webservername]\[username] in the logon window (Windows Security). Logging on from other browsers (Chrome and Firefox) works OK. The web services log shows one 401.2 and two 401.1 events. The Security event log shows two events; the first is a Failure Audit (680), the second is a Failure Audit (529) with these details: Logon Failure: Reason: Unknown user name or bad password User Name: [username] Domain: [webservername] Logon Type: 3 Logon Process: NtLmSsp Authentication Package: NTLM Workstation Name: [MyWorkstation] Caller User Name: - Caller Domain: - Caller Logon ID: - Caller Process ID: - Transited Services: - Source Network Address: [999.999.999.999] Source Port: 20089 Any ideas appreciated.
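
    One avenue worth checking (an assumption, not something confirmed in the question): Windows 7 defaults to sending NTLMv2 responses and requiring 128-bit session security, which can break NTLM logons against a standalone Win2003 box with mismatched settings. Comparing the LAN Manager authentication and minimum session security values on both machines would confirm or rule this out:

        rem run on both the Win7 client and the 2003 server and compare the values
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v NtlmMinClientSec
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v NtlmMinServerSec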

    Read the article

  • Can't access site internally, but DNS works

    - by BloodyIron
    1) I have apache2 running a vhost for a website.
    2) This apache2 instance is already successfully set up so that other websites on it are accessible internally and externally.
    3) I am using an internal bind9 server to resolve the new website's domain internally to the private IP. This bind9 server is not public facing, nor is it the master server on the internet.
    4) The DNS internally resolves to the right IP.
    5) Firefox reports "Server not found".
    6) I have copied the config almost identically from other configs that are known to work (adjusting for proper paths, of course). In turn I have reloaded and restarted apache2 repeatedly.
    7) I have an entry to forward the .org, .info and .net alternative TLDs to .com in the vhost config for this domain, and my browser goes from .org to .com despite note #5.
    8) /var/log/apache2/access.log shows when someone externally tries to access the site, but no activity is observed when someone tries to access it internally. Changing the log level does not appear to improve the situation.
    9) I am out of ideas; nothing appears to be wrong. Please help?
    To be explicit: why is this new site unreachable internally? I would like to clarify something, even though I have already outlined it. YES, I know this system is in a private network. NO, it is not going through a router. YES, I am using an internal DNS server (bind9) to resolve, and YES, it does resolve to the proper internal IP. YES, other websites on the same server, set up in the same way with internal resolution, work right now and have done for a while. Everything for this domain is set up the same as the other working domains as far as I can tell. The other working domains are internally AND externally accessible. This domain I am working with is currently only externally accessible. When I go to it internally, Firefox tells me "Server not found".
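
    Some diagnostics that might narrow this down (the host name and IP below are placeholders, not values from the question); "Server not found" in Firefox usually means the browser's resolver never returned an address, so separating name resolution from the web server is the first step:

        # from an internal client: confirm what the resolver the browser uses actually returns
        dig newsite.example.com +short
        # bypass DNS entirely; if this works, the problem is name resolution, not Apache
        curl -v -H 'Host: newsite.example.com' http://192.168.0.10/
        # on the web server: confirm Apache is listening on the internal address and knows the vhost
        netstat -tlnp | grep ':80'
        apache2ctl -S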

    Read the article

  • Uninstall Glassfish and metro completely

    - by user775829
    I thought of updating my GlassFish server from 2.1 to 3.1.1 on a Linux machine, and downloaded the .ZIP package. However, during the uninstall of GlassFish v2.1 I did not find an uninstall.sh file in the "bin" directory. Here are the steps I took: I removed the glassfish folder (rm -rf ...). At the end of removing the files, it notified me that it could not remove 2 files used by Metro. I can't recollect those file names, but I manually deleted that folder. I made a mistake by not uninstalling Metro first. I uninstalled Metro completely after that, but it seemed pointless (it did uninstall successfully :P ). I transferred the GlassFish 3.1.1 ZIP file, unzipped it and configured it. The following are a few problems I am facing: I cannot deploy any of my WAR files. It gives errors such as "Error creating bean, Instantiation of bean failed, etc." (However, the same WAR file deploys successfully on another Linux machine.) When I try installing Metro v2.1 separately, it does not show the admin console, or it times out while starting the domain. The log file of the domain says it has started the domain successfully and the process is also created, but after running the command (asadmin) it takes forever and times out without showing "Domain Started Successfully". There is no uninstall.sh in the GlassFish v3.1.1 bin directory. How do I completely uninstall GlassFish v3.1.1 and Metro 2.1? Which files will I have to remove manually?
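
    For a ZIP-based install there is no uninstall.sh; removal is essentially deleting the unpacked tree plus per-user state. A sketch only - the /opt path and the per-user file names are typical defaults, not taken from the question:

        # stop any running domains first
        /opt/glassfish3/bin/asadmin stop-domain domain1
        # a ZIP install is self-contained, so removing the tree removes the server, its domains and the bundled Metro modules
        rm -rf /opt/glassfish3
        # per-user files left behind by asadmin (harmless to remove if present)
        rm -rf ~/.gfclient ~/.asadminpass ~/.asadmintruststore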

    Read the article

  • How can I tell why I have access to a file share on Windows Server

    - by Joel
    I have a file share on a Windows 2008 R2 server in an AD domain (call it \\SECURESERVER\STUFF) and I am not sure if I have the share and folder permissions set up right. I noticed the problem when I set up a new server (WORKGROUP\FOREIGNSERVER) that was not joined to the domain and tried to copy some files off of \\SECURESERVER\STUFF. I was surprised to find that when I tried to access the files, it did not prompt me for a username and password and proceeded to give me full access to the files. That worried me, so I tried the same thing on some workstations that were not in the domain, and they did NOT behave the same way (they did prompt for a username/password, as desired/expected). So I think there is something peculiar about FOREIGNSERVER. I am logging into it with a local admin account, but my domain and SECURESERVER should know nothing of this server. I've carefully gone through the share and folder permissions on the share, but I can't find the reason that FOREIGNSERVER has access. How can I find out why FOREIGNSERVER has access to SECURESERVER?
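
    A couple of checks that often explain this (suggestions only): the session may be authenticating with a stored credential, or as a local account on SECURESERVER that happens to have the same name and password as the account in use on FOREIGNSERVER.

        rem on FOREIGNSERVER: list saved credentials Windows may be using silently for SECURESERVER
        cmdkey /list
        rem on SECURESERVER: see which user name the incoming session actually authenticated as
        net session
        rem also check the Security event log logon events on SECURESERVER for the account and logon type used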

    Read the article

  • server_name seems to be ignored in nginx

    - by user46171
    I have two domains set up in nginx.conf. Both are using SSL with their own certificates, and proxy to Apache. However, the second domain is completely ignored, and nginx always resolves to the first domain. I can't see what the issue is with this configuration, having set the server_name in each case correctly (as far as I can see):

        http {
            include mime.types;
            default_type application/octet-stream;
            keepalive_timeout 65;

            upstream site {
                # real IP addresses masked
                server xx.xxx.x.xxx;
                server xx.xxx.x.xxx;
            }

            server {
                # this domain always works
                listen 443;
                server_name *.first-site.com;
                ssl on;
                ssl_certificate /var/ssl/first-site.crt;
                ssl_certificate_key /var/ssl/first-site.key;
                location / {
                    access_log off;
                    proxy_connect_timeout 15;
                    proxy_next_upstream error;
                    proxy_pass http://site;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-Protocol https;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_redirect off;
                }
            }

            server {
                # this domain is ignored, always resolves to first-site.com
                listen 443;
                server_name *.second-site.com;
                ssl on;
                ssl_certificate /var/ssl/second-site.crt;
                ssl_certificate_key /var/ssl/second-site.key;
                location / {
                    access_log off;
                    proxy_connect_timeout 15;
                    proxy_next_upstream error;
                    proxy_pass http://site;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-Protocol https;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_redirect off;
                }
            }
        }
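
    Two details worth checking (guesses based only on the config shown): server_name *.second-site.com matches subdomains but not the bare second-site.com, and any HTTPS request nginx cannot match falls through to the first server block for that port; also, serving two certificates on one IP and port requires SNI support in both nginx and the client, and without it every TLS handshake gets the first certificate. A minimal sketch of the first point:

        server {
            listen 443;                      # on newer nginx builds: listen 443 ssl;
            server_name second-site.com *.second-site.com;
            # ... certificate and proxy settings as before ...
        }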

    Read the article

  • Active Directory server down, recovering without reinstalling

    - by whatever
    My Windows 2003 server suddenly ceased to function as a DC (this server is the only DC of the domain). All AD-related services are down. The only way I can log in to AD is physically at the machine. Every time I access an AD-related service (e.g. "AD Users and Computers") I get the error below: Naming information cannot be located because: The specified directory service attribute or value does not exist. Contact your system administrator to verify that your domain is properly configured and is currently online. I found the system event below, which matches the time when the issue started; it recurs every time I reboot the server. NTDS General | Global Catalog | Active Directory was unable to establish a connection with the global catalog. Additional Data Error value: 1355 The specified domain either does not exist or could not be contacted. Internal ID: 3200d33 I started the troubleshooting with DNS. Netdiag throws the error below, although I think this is simply a consequence of not being able to access the Global Catalog: The procedure entry point DnsGetPrimaryDomainName_UTF8 could not be located in the dynamic link library DNSAPI.dll. Anyway, DNS seems OK because I can ping the DC's FQDN from the DC itself. I found the solution below, which is supposed to help by doing some cleanup of the metadata: http://support.microsoft.com/kb/216498 If I follow procedure 1, here is what I get at step 9: no current site Domain - DC=<mydomain>,DC=<com> no current server no current naming context I can continue the procedure until step 14. I haven't tested step 15, as my understanding is that I will then have to reinstall the whole AD again. Is there any way I can recover my AD from here without having to reinstall the whole thing? Update: Yes, the server was powered off/on because a reboot would take forever (not because I thought power cycling the unit would fix it any better than a reboot).
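
    Before rebuilding, one recovery avenue sometimes worth trying on a single-DC domain (a sketch only; it assumes the NTDS database itself was damaged by the power cycle, which the question does not confirm) is an offline integrity check and semantic analysis of ntds.dit from Directory Services Restore Mode (press F8 at boot):

        rem run from a command prompt after booting into Directory Services Restore Mode
        ntdsutil
         files
          integrity
          quit
         semantic database analysis
          go
          quit
        quit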

    Read the article

  • DFS Root namespace is RDWR for all users

    - by Patrick
    We have an existing DFS Replication and Namespace group that we use to serve the company's files. This has been operating fine for us for some time now, and continues to do so. However, a situation arose yesterday afternoon that has left us stumped. We have our namespace presented as: \\domain.co.uk\public\[8 or 9 folders that are mapped to the users in the business] We had a problem this morning that meant a number of users started mapping their AD home drive directly to the \\domain.co.uk\public directory, and we found that they had read/write access. This rapidly became a problem, as at least one director saved some moderately sensitive documents in there and basically anyone could read them. I've tidied up that specific problem with some deft scripting and a slight modification of Group Policy. However, I would like to make \public read-only; the trouble is I can't work out where the ACLs for that folder are held. All the folders presented as \\domain.co.uk\public\[folder] are 'real' folders on logical volumes on our DFS servers, so they are secured with groups applied via the 'Security' tab. I'd like to do the same on \public, but I can't find it. I have looked through, amongst other things, \Sysvol\domain.co.uk but can't find it, and after a lot of clicking and a bit of reading I can't see how to lock it down. Any thoughts?
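
    For what it's worth (not verified for this environment): a domain-based namespace root is backed by a real folder on each namespace server, by default under C:\DFSRoots\<rootname>, and that folder's NTFS ACL is what governs the root itself. Something along these lines, repeated on every namespace server, might be the lever being looked for; the path and group are assumptions:

        rem inspect the current ACL on the namespace root folder (adjust the path if the root was created elsewhere)
        icacls C:\DFSRoots\Public
        rem example: stop inheriting (keeping existing entries), then cut ordinary users down to read/execute
        icacls C:\DFSRoots\Public /inheritance:d
        icacls C:\DFSRoots\Public /grant:r "DOMAIN\Domain Users:(RX)"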

    Read the article

  • Why is my global security group being filtered out of my logon token?

    - by Jay Michaud
    While investigating the effects of filtered tokens on my file permissions, I noticed that one of my global security groups is being filtered in addition to the regular system-defined filtered groups. My Active Directory environment is a single-domain forest on the Windows Server 2003 functional level. I'll call the domain "mydomain.example.com". I am logged onto a Windows Server 2008 Enterprise Edition machine (not a domain controller) as a member of the "MYDOMAIN\Domain Admins" group and the "MYDOMAIN\MySecurityGroup" global security group (among others). When I run "whoami /groups" from an elevated command prompt, I see the full list of groups to which my account belongs as expected. When I run "whoami /groups" from a regular, non-elevated command prompt, I see the same list of groups, but the following groups are described as "Group used for deny only":
    1. BUILTIN\Administrators
    2. MYDOMAIN\Schema Admins
    3. MYDOMAIN\Offer Remote Assistance Helpers
    4. MYDOMAIN\MySecurityGroup
    Numbers 1 through 3 above are expected based on Microsoft documentation; number 4 is not. The "MYDOMAIN\MySecurityGroup" global security group is a group that I created. It contains three non-built-in global security groups, and these security groups contain only non-built-in user accounts. (That is, I created all of the accounts and groups that are members of the "MYDOMAIN\MySecurityGroup" global security group.) There are other, similar groups of which my account is a member that are not being filtered out of my logon token, and this group is not granted any specific user rights in the security settings of this computer or in Group Policy. What would cause this one group to be filtered out of my logon token?
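
    One hypothesis worth testing: UAC filters a group out of the non-elevated token if the group is itself a member, directly or through nesting, of one of the administrative groups. Checking what MySecurityGroup belongs to would confirm or rule that out, for example:

        dsquery group -samid MySecurityGroup | dsget group -memberof -expand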

    Read the article

  • Apache Virtual Host Issue

    - by Nik
    I think I hate Apache now, but on with the issue. It might be a configuration error on my end or just my inability to see what's right in front of me, but I'm trying to configure a sub-domain in Apache and no matter what, it always redirects the sub-domain to the web root of the main domain. My configuration is posted below (and yes, the domain name information was purposefully modified):

        <VirtualHost *>
            DocumentRoot /var/www/root/
            ServerName example.com
            <Directory /var/www/root/>
                allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

        <Directory /usr/share/squirrelmail>
            Options Indexes FollowSymLinks
            <IfModule mod_php5.c>
                php_flag register_globals off
            </IfModule>
            <IfModule mod_dir.c>
                DirectoryIndex index.php
            </IfModule>
            # access to configtest is limited by default to prevent information leak
            <Files configtest.php>
                order deny,allow
                deny from all
                allow from 127.0.0.1
            </Files>
        </Directory>

        # users will prefer a simple URL like http://webmail.example.com
        <VirtualHost *>
            DocumentRoot /usr/share/squirrelmail/
            ServerName squirrelmail.example.com
        </VirtualHost>
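
    Two things that often cause exactly this symptom (assumptions, since only a fragment of the config is shown): name-based virtual hosting on Apache 2.2 and earlier needs an explicit NameVirtualHost directive, otherwise the first <VirtualHost *> answers every request; and the comment mentions webmail.example.com while the vhost's ServerName is squirrelmail.example.com, so the name actually being browsed may simply not match any vhost. A minimal sketch:

        # needed once, before the vhost blocks, on Apache 2.2 and earlier
        NameVirtualHost *

        <VirtualHost *>
            ServerName example.com
            DocumentRoot /var/www/root/
        </VirtualHost>

        <VirtualHost *>
            ServerName webmail.example.com
            ServerAlias squirrelmail.example.com
            DocumentRoot /usr/share/squirrelmail/
        </VirtualHost>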

    Read the article

  • Google MAIL not arriving - relay not allowed

    - by renevdkooi
    I have a server with sendmail hosting my domain mind-zone.nl, and I changed the MX records to point to the server. When I use Hotmail or any other client, the email arrives and everything is fine. ONLY mail from Gmail's servers is bounced; Gmail returns "relay denied". I have set all the virtual host settings etc.; from the command line I can send mail as well, Hotmail works, etc. Just not Gmail. The strange thing is what Gmail returns - look at the lower part, the "Received: by" lines: they show IP addresses which are not mine and have absolutely nothing to do with my domain. Yet when I do an nslookup and switch to Google's DNS server, it states that the IP address for my domain correctly points at my server. Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 554 554 5.7.1: Relay access denied (state 14). ----- Original message ----- MIME-Version: 1.0 Received: by 10.14.37.138 with SMTP id y10mr3421504eea.43.1297665573901; Sun, 13 Feb 2011 22:39:33 -0800 (PST) Received: by 10.14.29.75 with HTTP; Sun, 13 Feb 2011 22:39:33 -0800 (PST)
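
    Two checks that might explain the difference (suggestions only): Gmail could still be delivering to an old or secondary MX host that is not configured to accept mail for the domain, and sendmail only accepts (rather than relays) mail for domains listed in local-host-names. For example:

        # what MX hosts the world, and Google's resolvers, currently see for the domain
        dig MX mind-zone.nl +short
        dig MX mind-zone.nl @8.8.8.8 +short
        # on the server: the domain must be listed here, or sendmail treats incoming mail for it as relaying
        cat /etc/mail/local-host-names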

    Read the article

  • Remote Desktop connection to vista vs. xp

    - by CMP
    I am trying to log into my work computer remotely. I am using Windows 7 on my laptop. I have created a VPN connection to the network, and I am doing a Remote Desktop connection directly to the IP of my box (192.168.xxx.yyy). If I do a remote connection to a different box, running XP, it goes into remote desktop mode immediately and I see the Windows login dialog as I am used to seeing it. If I try remoting to my box, which is running Vista, I do not see the remote desktop mode, but an additional dialog on my local machine asking for my credentials. It defaults to my local username. It allows me to log in as a different user, but the domain it shows is still my local domain, not my work domain, so none of my usernames or passwords work. There doesn't appear to be a way to change the domain. Trying several more boxes, it appears to act differently with XP and Vista target machines. I feel like this must be a configuration issue, but I am not sure what the problem is. Any idea how I can connect?
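
    For what it's worth: that extra local prompt is Network Level Authentication on the Vista target, and the domain it shows is simply implied by the username typed, so qualifying the account name usually gets past it. Entering the username in one of these forms (WORKDOMAIN and myusername are placeholders) changes the domain used:

        WORKDOMAIN\myusername
        myusername@workdomain.local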

    Read the article

  • Providing access to a no-www website in an active directory environment

    - by oasisbob
    Our website is hosted externally, off our network. The canonical URL intentionally lacks www, and any request containing www is 301-redirected to the canonical URL. So far, so good. The problem is providing access to the website from within our LAN. In theory the answer is simple: add a host record in DNS pointing foobarco.org to the external web host (e.g. foobarco.org -> 203.0.113.7). However, our Active Directory domain is the same as our public website (foobarco.org), and AD appears to periodically auto-create host (A) records in the domain root corresponding to our domain controllers. This causes obvious problems: users on the LAN attempting to access the website resolve the domain controllers instead. As a stop-gap measure we're overriding DNS using the hosts file on clients, but this is a quick hack that doesn't scale well. The hosts-file hack hasn't broken anything obvious, so I doubt that this behavior is essential to AD operations, but I haven't found a way to disable it. Is it possible to override this behavior?
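
    There is a documented Netlogon setting that stops domain controllers from registering the domain-apex A records (the "same as parent" LdapIpAddress records); whether it is safe here depends on what on the LAN relies on those records, so treat this as a pointer rather than a recommendation. Roughly, on each DC:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters" /v DnsAvoidRegisterRecords /t REG_MULTI_SZ /d LdapIpAddress /f
        rem then delete the existing apex A records from the zone and restart Netlogon so they are not re-registered
        net stop netlogon && net start netlogon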

    Read the article

  • Read access to Active Directory property (uSNChanged)

    - by Tom Ligda
    I have an issue with read access to the uSNChanged property when doing LDAP searches. If I do an LDAP search with a user that is a member of the Domain Admins group (UserA), I can see the uSNChanged property for every user. The problem is that if I do an LDAP search with a user (UserB) that is not a member of the Domain Admins group, I can see the uSNChanged property for some users (UserGroupA) and not for others (UserGroupB). When I look at the users in UserGroupA and compare them to the users in UserGroupB, I see a crucial difference on the "Security" tab. The users in UserGroupA have "Include inheritable permissions from this object's parent" unchecked. The users in UserGroupB have that option checked. I also noticed that the users in UserGroupA are users that were created earlier, while the users in UserGroupB were created recently. It's difficult to quantify, but I estimate the boundary in creation time between the users in UserGroupA and UserGroupB is about 6 months ago. What could cause user creation to default to having that security option checked as opposed to unchecked? A while back (maybe around 6 months ago?) I changed the domain functional level from Windows Server 2003 to Windows Server 2008 R2. Would that have had this effect? (I can't exactly downgrade the domain functional level to test it out.) Is this security option actually the cause of the issue with read access to the uSNChanged property on LDAP searches? It seems correlated, but I'm not sure about causation. What I want in the end is for all authenticated users to have read access to the uSNChanged property for all users when doing an LDAP search. I would also be OK with granting read access for that property to an AD group; then I can control access by adding members to the group.
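
    If granting the read explicitly turns out to be acceptable, dsacls can add a property-level read ACE at a container and let it inherit to user objects. A rough sketch only - the group name and DN below are made up, and it does not explain the underlying inheritance difference:

        rem grant "read property uSNChanged" on user objects, inherited to sub-objects of the container
        dsacls "CN=Users,DC=mydomain,DC=example,DC=com" /I:S /G "MYDOMAIN\LDAP-Readers:RP;uSNChanged;user"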

    Read the article

  • Bind9 not doing anything with forwarded query responses?

    - by Rykaro
    I have a BIND DNS server that is the local production DNS server, and a Windows 2008 R2 domain controller which provides DNS for a lab environment with the domain xyz.lab. I've configured BIND to forward DNS requests for the domain xyz.lab to the Windows DNS server with this config:

        zone "xyz.lab" {
            type forward;
            forward only;
            forwarders { x.x.x.x; };
        };

        zone "x.x.x.in-addr.arpa" {
            type forward;
            forward only;
            forwarders { x.x.x.x; };
        };

    And the BIND options are (the all_internal ACL includes the subnets of both the production and lab networks as well as the loopback of the BIND server):

        allow-query { all_internal; };
        allow-recursion { all_internal; };
        allow-transfer { none; };
        notify no;
        minimal-responses yes;
        version "unknown";

    Unfortunately, when I do an nslookup or dig on the BIND server for a host in the lab domain, the request times out. The logs on the Windows 2008 DNS server show it receiving the query and responding to it, and a network packet trace shows the query responses arriving at the BIND server. The servers reside on the same switch, with a router providing connectivity between the layer 3 subnets (production and lab are on different subnets), and there is a round-trip time of between 3 ms and 5 ms on pings between the two servers, so I don't think latency is causing the timeout. In summary, a query response arrives back at the BIND server and the nslookup/dig times out. Why does BIND not seem to be doing anything with the query responses when it receives them?
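
    One common culprit when forwarded answers arrive but are silently discarded (a hypothesis only): DNSSEC validation. If validation is enabled, answers for a private TLD like .lab fail to validate (the signed root can prove the TLD does not exist) and BIND drops them, returning SERVFAIL or timing out. A quick test, and the blunt fix, might look like:

        # ask the local BIND with DNSSEC checking disabled; if this returns an answer, validation is the problem
        dig @127.0.0.1 somehost.xyz.lab +cd

        # named.conf options: disabling validation globally is the blunt fix;
        # newer BIND releases also support per-domain exceptions via validate-except { "xyz.lab"; };
        options {
            dnssec-validation no;
        };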

    Read the article

  • How can I ensure that my static ip address is read from /etc/network/interfaces rather than dhcp?

    - by jonderry
    This is a follow-up to the following question. I'm trying to set a static IP by changing /etc/network/interfaces to the following:

        # interfaces(5) file used by ifup(8) and ifdown(8)
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.2.133
            netmask 255.255.255.0
            gateway 192.168.2.1
            dns-nameservers 8.8.8.8

    and then running /sbin/ifdown eth0; /sbin/ifup eth0. However, the change in IP address doesn't take effect unless I also edit /etc/dhcp/dhclient.conf and comment out the following before running ifdown; ifup:

        request subnet-mask, broadcast-address, time-offset, routers,
            domain-name, domain-name-servers, domain-search, host-name,
            dhcp6.name-servers, dhcp6.domain-search,
            netbios-name-servers, netbios-scope, interface-mtu,
            rfc3442-classless-static-routes, ntp-servers,
            dhcp6.fqdn, dhcp6.sntp-servers;

    Strangely, after commenting out this line, running ifdown; ifup works, but when I uncomment it, the behavior does not revert to the previous behavior of ignoring changes to my settings in /etc/network/interfaces (this doesn't seem like a problem, but I really need to be able to repeat the problem so that I can be confident that my solution is robust). Also, I'd rather not have to edit /etc/dhcp/dhclient.conf to change my static IP, since it seems I should be able to do this by editing only interfaces. Can anyone explain the issues I'm seeing above and suggest the best way of making changes to static IP addresses take effect, in a reproducible way, so that I can be sure my approach works?
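
    A likely explanation (not verified on this machine): a dhclient process started while eth0 was still configured for DHCP keeps running and re-applies its lease over the static settings, so the outcome depends on whether that client happens to be alive at the time. Making sure DHCP has fully let go of the interface before switching usually makes the change repeatable:

        # release the DHCP lease and stop any dhclient still bound to eth0
        sudo dhclient -r eth0
        # clear whatever addresses DHCP left on the interface, then bring it up from /etc/network/interfaces
        sudo ip addr flush dev eth0
        sudo ifdown eth0 ; sudo ifup eth0
        # confirm the static address took
        ip addr show eth0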

    Read the article

  • Permissions for Multiple User VPS

    - by adnymarc
    I have a Linode VPS server that I have recently set up and am migrating to from Mediatemple, where I have a VPS managed by Plesk. I dislike the Plesk interface and the mess it makes of a lot of things, but appreciated its ability to give multiple people access to different domains on a server. I have most everything set up the way I would like it, but am having issues with permissions for my domain directories. I am running Ubuntu 8.04 LTS and Apache 2 as my web server. I have domains successfully located in /var/www/vhosts/domainname.com, but I have to modify files as root in order to add or change files for the domains. I would like to set up access with the following criteria: Each domain can have a user assigned to it (and allow the same user to manage multiple domains - I could even create symlinks in their home folder to their domains). Certain users will have shell access and may be chrooted to the domain directory they control. FTP needs to be set up and able to access the domains correctly, so that content editors for each domain can upload/download without permissions issues. I am relatively new to Linux sysadmin work and have searched for a good guide to help solve these issues but haven't been able to find one yet. Thanks in advance for your help.
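
    A sketch of one common way to wire this up (the user, group and paths are examples, and the SFTP chroot needs OpenSSH 4.9 or newer, which is newer than what stock Ubuntu 8.04 ships; the chroot target must also be root-owned and not group/world writable):

        # one owner account per domain, sharing a group the web server can read through
        sudo adduser site1owner
        sudo addgroup webusers && sudo adduser site1owner webusers
        sudo chown -R site1owner:www-data /var/www/vhosts/domainname.com
        sudo chmod -R g+rX /var/www/vhosts/domainname.com
        ln -s /var/www/vhosts/domainname.com /home/site1owner/domainname.com

        # /etc/ssh/sshd_config -- lock SFTP-only users into the vhosts tree (OpenSSH >= 4.9)
        Match Group webusers
            ChrootDirectory /var/www/vhosts
            ForceCommand internal-sftp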

    Read the article

  • Folder Sharing NTFS permissions with Share Permission

    - by Muhammad Adly
    I have a problem in my domain. The history: I had a server with Windows 2008 R2 installed and the following roles on it (AD, DNS, DHCP, File). A month ago I decided to install a new 2008 R2 server to hold AD, DNS and DHCP, and leave the file server role on the old one. I did exactly the following:
    1) robocopy all my data to an external HDD
    2) install the new server with 2008 R2
    3) transfer all 5 roles to move the domain to the new server (MainDC)
    4) NETLOGON and SYSVOL did not transfer, but I decided to reinitialize them and they are now operating on MainDC
    5) re-create and re-configure new GPOs and link them to my OUs
    6) reinstall the old server's operating system with a fresh installation of Windows 2008 R2 (FileServer)
    7) join it to my domain with my domain credentials.
    The issue: when I share a folder on \\fileserver, the permissions I set in the sharing permissions are applied to the main shared folder and its subfolders, but the security (NTFS) settings are not applied. That is, say I'm sharing \\fileserver\MainFolder with a share permission of Read for Authenticated Users, so everyone can read the main shared folder. If I then set a security permission on \\fileserver\MainFolder\User1 so that User1 can Read/Write/Modify, User1 cannot do any of that when accessing it through the network share. I tried a lot of steps from topics online - taking ownership of the folder, removing inheritance from the parent folder, applying changes to child objects - and I also tried building a new folder structure, but I get the same issue. I tried another host PC, and I still get the same issue.
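
    Worth keeping in mind while diagnosing this: the effective permission over a share is the more restrictive of the share permission and the NTFS permission, so a share that grants Authenticated Users only Read caps every network user at read regardless of the NTFS ACL. A common pattern (a sketch, with made-up paths) is to open the share up when creating it and do the real filtering with NTFS:

        rem grant Change at the share level (when creating the share) and let NTFS do the per-user restriction
        net share MainFolder=D:\Shares\MainFolder /grant:"Authenticated Users",CHANGE
        rem verify and set the NTFS ACL on the per-user subfolder
        icacls D:\Shares\MainFolder\User1
        icacls D:\Shares\MainFolder\User1 /grant "DOMAIN\User1:(OI)(CI)M"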

    Read the article

  • htaccess rewriting all subdomains to subdirectories

    - by indorock
    I'm trying to build a catch-all for any subdomains (not captured by previous rewrite rules) of a certain domain, and serve a website from a subdirectory that resides in the same folder as the .htaccess file. I already have my vhosts.conf sending all unmapped requests to a "playground" folder, where I want to easily create new subdomains by simply adding a subfolder. So my structure looks like this: /var/www/playground |-> /foo |-> /bar with the .htaccess living inside the /playground folder and /foo and /bar being separate websites. I want http://foo.domain.com to point to /foo and http://bar.domain.com to /bar. Here is my .htaccess file:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^([^.]+).domain.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/%1/(.*)
        RewriteRule ^(.*) /%1/$1 [L]

    This is supposed to capture the subdomain, add it as a subfolder in the RewriteRule, and then append the path information after the slash. The second RewriteCond is there to prevent an infinite loop. My idea was that %1 in the second RewriteCond would be able to refer to the capture group in the first RewriteCond. But so far I haven't had any success; it always ends up in a redirect loop. If I replace %1 in the second RewriteCond with a hardcoded 'foo' or 'bar', it works, which leads me to believe that you cannot refer to a capture group inside a RewriteCond. Is this true? Or am I missing something?
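
    The usual explanation (worth verifying against the mod_rewrite docs for the Apache version in use): %N is expanded in a RewriteCond's test string and in rewrite substitutions, but not inside the condition pattern itself, which is where it appears here. A known workaround is to join the host capture and the URI into one test string and use an ordinary regex backreference within that single pattern; a sketch:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^([^.]+)\.domain\.com$ [NC]
        # once the prefix is present, "foo#/foo/..." matches the pattern below and the negation stops the loop
        RewriteCond %1#%{REQUEST_URI} !^([^#]+)#/\1/
        RewriteRule ^(.*)$ /%1/$1 [L]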

    Read the article

  • Provide a user with service start/stop permissions

    - by slakr007
    I have a very basic domain that I use for development. I want to create a GPO that gives users in the Backup Operators group start/stop permissions for two specific services on a specific server. I have read several articles about this, and they all indicate that it is very easy: create a GPO, give the user start/stop permissions on the services under Computer Configuration > Policies > Windows Settings > Security Settings > System Services, and voila, done. Not so much, but I have to be doing something wrong. My install is pretty much the default. The domain controller is in the Domain Controllers OU, the Backup Operators group is under Builtin, and I created a user called Backup under Users. I created a GPO and linked it to the Domain Controllers OU. In the GPO I give the Backup user permission to start/stop the two services on the server. I forced an update with gpupdate. I used Group Policy Results to verify that my GPO is the winning GPO giving the user permission to start/stop the two services. However, the user is still unable to start/stop them. I attempted different loopback settings on the GPO to no avail. I'm sort of at a loss here.
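
    As a cross-check that is independent of Group Policy, the service's security descriptor can be read (and, as a stopgap, set) directly with sc; this shows whether the GPO's security template actually landed on the service. The SDDL below is only illustrative - a real sdset call must keep every existing ACE and use the actual group SID:

        rem dump the current security descriptor of the service in SDDL form
        sc sdshow SomeService
        rem in service SDDL, RP = start, WP = stop, DT = pause/continue; an ACE granting start/stop/read to a group looks like:
        rem   (A;;RPWPLORC;;;S-1-5-21-xxxx-xxxx-xxxx-1234)
        rem sc sdset replaces the whole descriptor, so paste the full existing string plus the new ACE:
        sc sdset SomeService "D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;RPWPLORC;;;S-1-5-21-xxxx-xxxx-xxxx-1234)"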

    Read the article

  • Enable Automatic Code First Migrations On SQL Database in Azure Web Sites

    - by Steve Michelotti
    Now that Azure supports .NET Framework 4.5, you can use all the latest and greatest available features. A common scenario is to be able to use Entity Framework Code First Migrations with a SQL Database in Azure. Prior to Code First Migrations, Entity Framework provided database initializers. While convenient for demos and prototypes, database initializers weren't useful for much beyond that because, if you delete and re-create your entire database when the schema changes, you lose all of your operational data. This is the void that Migrations are meant to fill. For example, if you add a column to your model, Migrations will alter the database to add the column rather than blowing away the entire database and re-creating it from scratch. Azure is becoming increasingly easy to use - especially with features like Azure Web Sites. Being able to use Entity Framework Migrations in Azure makes deployment easier than ever. In this blog post, I'll walk through enabling Automatic Code First Migrations on Azure. I'll use the Simple Membership provider for my example.

    First, we'll create a new Azure Web Site called "migrationstest", including creating a new SQL Database along with it. Next we'll go to the web site and download the publish profile. In the meantime, we've created a new MVC 4 website in Visual Studio 2012 using the "Internet Application" template. This template is automatically configured to use the Simple Membership provider. We'll do our initial Publish to Azure by right-clicking our project and selecting "Publish…". From the "Publish Web" dialog, we'll import the publish profile that we downloaded in the previous step. Once the site is published, we'll just click the "Register" link from the default site. Since the AccountController is decorated with the [InitializeSimpleMembership] attribute, the initializer will be called and the initial database is created. We can verify this by connecting to our SQL Database on Azure with SQL Management Studio (after making sure that our local IP address is added to the list of Allowed IP Addresses in Azure).

    One interesting note is that these tables got created with the default Entity Framework initializer - which is to create the database if it doesn't already exist. However, our database did already exist! This is because there is a new feature of Entity Framework 5 where Code First will add tables to an existing database as long as the target database doesn't contain any of the tables from the model.

    At this point, it's time to enable Migrations. We'll open the Package Manager Console and execute the command:

    PM> Enable-Migrations -EnableAutomaticMigrations

    This will enable automatic migrations for our project. Because we used the "-EnableAutomaticMigrations" switch, it will create our Configuration class with a constructor that sets the AutomaticMigrationsEnabled property to true:

    1: public Configuration()
    2: {
    3:     AutomaticMigrationsEnabled = true;
    4: }

    We'll now add our initial migration:

    PM> Add-Migration Initial

    This will create a migration class called "Initial" that contains the entire model. But we need to remove all of this code, because our database already exists, so we are just left with empty Up() and Down() methods.
    1: public partial class Initial : DbMigration
    2: {
    3:     public override void Up()
    4:     {
    5:     }
    6:
    7:     public override void Down()
    8:     {
    9:     }
    10: }

    If we don't remove this code, we'll get an exception the first time we attempt to run migrations that tells us: "There is already an object named 'UserProfile' in the database". This blog post by Julie Lerman fully describes this scenario (i.e., enabling migrations on an existing database). Our next step is to add the Entity Framework initializer that will automatically use Migrations to update the database to the latest version. We will add these 2 lines of code to the Application_Start of the Global.asax:

    1: Database.SetInitializer(new MigrateDatabaseToLatestVersion<UsersContext, Configuration>());
    2: new UsersContext().Database.Initialize(false);

    Note the Initialize() call will force the initializer to run if it has not been run before. At this point, we can publish again to make sure everything is still working as we are expecting. This time we're going to specify in our publish profile that Code First Migrations should be executed. Once we have re-published we can once again navigate to the Register page. At this point the database has not been changed, but Migrations is now enabled on our SQL Database in Azure. We can now customize our model. Let's add 2 new properties to the UserProfile class - Email and DateOfBirth:

    1: [Table("UserProfile")]
    2: public class UserProfile
    3: {
    4:     [Key]
    5:     [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
    6:     public int UserId { get; set; }
    7:     public string UserName { get; set; }
    8:     public string Email { get; set; }
    9:     public DateTime DateOfBirth { get; set; }
    10: }

    At this point all we need to do is simply re-publish. We'll once again navigate to the Registration page and, because we had Automatic Migrations enabled, the database has been altered (*not* recreated) to add our 2 new columns. We can verify this by once again looking at SQL Management Studio. Automatic Migrations provide a quick and easy way to keep your database in sync with your model without the worry of having to re-create your entire database and lose data. With Azure Web Sites you can set up automatic deployment with Git or TFS and automate the entire process to make it dead simple.

    Read the article

  • Install SharePoint 2013 on a two server farm

    - by sreejukg
    When SharePoint 2010 was released, I published an article on how to install SharePoint on a two-server farm. You can find that article at the link below. http://weblogs.asp.net/sreejukg/archive/2010/09/28/install-sharepoint-2010-in-a-farm-environment.aspx Now it is time for SharePoint 2013. SharePoint 2013 brings lots of improvements to the topologies, but it still supports a two-server architecture. Note that the "two-server architecture" is meant for small implementations with limited service applications. Refer to the link below to understand more about the SharePoint architecture: http://technet.microsoft.com/en-us/sharepoint/fp123594.aspx A two-tier farm consists of a database server and a web/application server. In this article I am going to explain how to install SharePoint in a two-server farm. I prepared 2 servers, both of them joined to a domain (SP2013Domain), and on one server I installed SQL Server 2012 (server name: SP2013_DB). Now I am going to install SharePoint 2013 on the second server (server name: SP2013). The following domain accounts are created for the installation:

    SQLService
    - Purpose: SQL Server service account; this account is used as the service account for SQL Server.
    - Server roles required: domain user account / local account

    spSetup
    - Purpose: you will be running SharePoint setup and the SharePoint Products and Configuration Wizard using this account.
    - Server roles required: domain user account; member of the Administrators group on each server on which Setup is run (in our case SP2013); SQL Server login on the computer running SQL Server; member of the Server admin SQL Server security role

    spDataaccess
    - Purpose: configure and manage the server farm; application pool identity for the Central Administration website; Microsoft SharePoint Foundation Workflow Timer Service.
    - Server roles required: domain user account (other permissions will be set for this account automatically)

    The above is the minimum list of accounts needed for a SharePoint 2013 installation. You will also need additional accounts for services, application pool identities for web applications, etc. Refer to the service account requirements for SharePoint at the link below. http://technet.microsoft.com/en-us/library/cc263445.aspx In order to install SharePoint 2013, log in to the server using the setup account (spSetup). Now run the setup from the installation media. First you need to install the prerequisites. During the installation process, the server may restart several times. The installation wizard will guide you through the installation. In the next step, you need to agree to the terms and conditions as usual. Once you click next, the installation will start immediately. The installation wizard will let you know the progress of the installation. During the installation you may receive notifications to restart the server; you just need to click the finish button so that the system will be restarted. Once all the prerequisites are installed, you will get the success message. Click finish to close the dialog. Now, from the media, run the setup again and this time choose to install SharePoint Server. In the next screen, you need to enter the product key, and then click continue. Now you need to agree to the terms and conditions for SharePoint 2013, and click continue. Choose the file location as per your policies and click on the Install Now button. You will see the installation progress. Once completed, you will see the installation completed dialog. Make sure you select the "run products and configuration wizard" option and click close.
    From the start screen, click next to start the configuration wizard. You will receive a warning telling you that some of the services will be stopped during the installation. Select the "Create a new server farm" radio button and click next. In the next step, you need to enter the configuration database settings. Enter the database server details and then specify the database access account. You need to specify the farm account (spDataaccess); the wizard will grant additional privileges to the account as needed. In the next step you need to specify the passphrase. Note it down, as you will need this passphrase if you add additional servers to the farm. In the next step, you need to enter the Central Administration website port and security settings. You can choose a port or just keep the one suggested by the wizard. Click next and you will see a summary of what you have selected. Verify the selected settings; if you want to change any, just click back and change them, or click continue to start the configuration. The configuration may take some time, and you can view the progress. In case of any error you will get the log file; fix the error and start the configuration wizard again. Once the configuration is successful, you will see the success message. Just click finish. Now you can browse the Central Administration website. It is good to check the health analyzer to review whether there are any errors or warnings; no warnings or errors indicate a good installation. A two-server architecture is the minimum configuration for production environments. Small firms with fewer employees can implement SharePoint 2013 using this topology, and as the workload increases they can add more servers to the farm without reconstructing everything.

    Read the article

  • Browsing Your ADF Application Module Pooling Params with WLST

    - by Duncan Mills
    In ADF 11g you can of course use Enterprise Manager (EM) to browse and configure the settings used by ADF Business Components Application Modules, as shown here for one of my sample deployed applications. You can access this screen from the EM homepage by pulling down the Application Deployment menu and then choosing ADF > Configure ADF Business Components. Then select the profile that you are actually using (hint: look in the DataBindings.cpx file to work this out - probably the "Local" version unless you've explicitly changed it). So, from this screen you can change the pooling parameters and the world is good. But what if you don't have EM installed? In that case you can use the WebLogic scripting capabilities to view (and update) the MBean properties.

    Explanation

    The pooling parameters and many others are handled through Message Driven Beans that are created for the deployed application in the server. In the case of the ADF BC pooling parameters, this MBean combines the configuration deployed as part of the application with any overrides defined as -D environment commands on the JVM startup for the application server instance.

    Using WLST to Browse the Bean Values

    For our purposes here I'm doing this interactively, although you can also write a script or write Java to achieve the same thing.

    Step 0: Before You Start

    You will need the following:
    - Access to the console on the machine that is running the server
    - The WebLogic Admin username and password (I'll use weblogic/password as my example here - yours will be different)
    - The name of the deployed application (in this example FMWdh_application1)
    - The package path to the bc4j.xcfg file (in this example oracle.demo.fmwdh.model.service.common.bc4j.xcfg). This is based on the default path for your model project, so it should be fairly easy to work out.
    - The BC configuration your AM is actually running with (look in the DataBindings.cpx for that; in this example DealHelpServiceDeployed is the profile being used)

    Step 1: Start the WLST console

    To start at the beginning, you need to run the WLST command, but that needs a little setup.
    Change to the wlserver_10.3/server/bin directory, e.g. under your Fusion Middleware Home:
        [oracle@mymachine] cd /home/oracle/FMW_R1/wlserver_10.3/server/bin
    Set your environment using the setWLSEnv script, e.g. on Oracle Enterprise Linux:
        [oracle@mymachine bin] source setWLSEnv.sh
    Start the WLST interactive console:
        [oracle@mymachine bin] java weblogic.WLST
        Initializing WebLogic Scripting Tool (WLST) ...
        Welcome to WebLogic Server Administration Scripting Shell
        Type help() for help on available commands
        wls:/offline>

    Step 2: Enter the WLST commands

    Connect to the server:
        wls:> connect('weblogic','password')
    Change to the Custom root; this is where the AMPooling MBeans are registered:
        wls:> custom()
    Change to the bc4j MBean directory:
        wls:> cd ('oracle.bc4j.mbean.config')
    Work out the correct directory for the AM configuration you need. This is the difficult bit, not because it's hard to do, but because the names are long. The structure here is such that every child MBean is displayed at the same level as the parent, so for each deployed application there will be many directories shown. In fact, do an ls() command here and you'll see what I mean. Each application will have one MBean for the app as a whole, and then for each deployed configuration in the .xcfg file you'll see one for the config entry itself, and then one each for Security, DB Connection and AM Pooling.
    So if you deploy an app with just one configuration you'll see 5 directories; if it has two configurations in the .xcfg you'll see 9, and so on. The directory you are looking for will contain those bits of information you gathered in Step 0, specifically the application name, the configuration you are using and the xcfg name:
    - First of all, narrow your list to just those directories returned from the ls() command that begin oracle.bc4j.mbean.config:name=AMPool. These identify the AM pooling MBeans for all the deployed applications.
    - Now look for the correct application name, e.g. Application=FMWdh_application1
    - The config setting in that sub-list should already be correct and match what you expect, e.g. oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg
    - Finally look for the correct value for the AppModuleConfigType, e.g. oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed
    Now you have identified the correct directory name, change to that (keep the name on one line of course - I've had to split it across lines here for clarity):
        wls:> cd ('oracle.bc4j.mbean.config:name=AMPool,
            type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,
            oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,
            Application=FMWdh_application1,
            oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')
    Now you can actually view the parameter values with a simple ls() command:
        wls:> ls()
    And here's the output, in which you can view the realtime values of the various pool settings:
        -rw- AmpoolConnectionstrategyclass oracle.jbo.common.ampool.DefaultConnectionStrategy
        -rw- AmpoolDoampooling true
        -rw- AmpoolDynamicjdbccredentials false
        -rw- AmpoolInitpoolsize 2
        -rw- AmpoolIsuseexclusive true
        -rw- AmpoolMaxavailablesize 40
        -rw- AmpoolMaxinactiveage 600000
        -rw- AmpoolMaxpoolsize 4096
        -rw- AmpoolMinavailablesize 2
        -rw- AmpoolMonitorsleepinterval 600000
        -rw- AmpoolResetnontransactionalstate true
        -rw- AmpoolSessioncookiefactoryclass oracle.jbo.common.ampool.DefaultSessionCookieFactory
        -rw- AmpoolTimetolive 3600000
        -rw- AmpoolWritecookietoclient false
        -r-- ConfigMBean true
        -rw- ConnectionPoolManager oracle.jbo.server.ConnectionPoolManagerImpl
        -rw- Doconnectionpooling false
        -rw- Dofailover false
        -rw- Initpoolsize 0
        -rw- Maxpoolcookieage -1
        -rw- Maxpoolsize 4096
        -rw- Poolmaxavailablesize 25
        -rw- Poolmaxinactiveage 600000
        -rw- Poolminavailablesize 5
        -rw- Poolmonitorsleepinterval 600000
        -rw- Poolrequesttimeout 30000
        -rw- Pooltimetolive -1
        -r-- ReadOnly false
        -rw- Recyclethreshold 10
        -r-- RestartNeeded false
        -r-- SystemMBean false
        -r-- eventProvider true
        -r-- eventTypes java.lang.String[jmx.attribute.change]
        -r-- objectName oracle.bc4j.mbean.config:name=AMPool,type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,Application=FMWdh_application1,oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed
        -rw- poolClassName oracle.jbo.common.ampool.ApplicationPoolImpl
    Thanks to Brian Fry on the JDeveloper PM Team, who did most of the work to put this sequence of steps together with me badgering him over his shoulder.
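
    The same steps can also be run non-interactively; a minimal WLST (Jython) script sketch using the article's example names (run it with java weblogic.WLST script.py; the admin URL is an assumption):

        # browse_ampool.py - dump the AM pooling MBean attributes for one deployed configuration
        connect('weblogic', 'password', 't3://localhost:7001')
        custom()
        cd('oracle.bc4j.mbean.config')
        cd('oracle.bc4j.mbean.config:name=AMPool,'
           'type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,'
           'oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,'
           'Application=FMWdh_application1,'
           'oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')
        ls()
        disconnect()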

    Read the article

  • ASP.NET MVC 3 Hosting :: ASP.NET MVC 3 First Look

    - by mbridge
    MVC 3 View Enhancements

    MVC 3 introduces two improvements to the MVC view engine:
    - Ability to select the view engine to use. MVC 3 allows you to select any of your installed view engines from Visual Studio by selecting Add > View (including the newly introduced ASP.NET "Razor" engine):
    - Support for the new ASP.NET "Razor" syntax. The newly previewed Razor syntax is a concise, lightweight syntax.

    MVC 3 Control Enhancements

    - Global Filters: ASP.NET MVC 3 allows you to specify a filter that applies globally to all Controllers within an app by adding it to the GlobalFilters collection. The RegisterGlobalFilters() method is now included in the default Global.asax class template and so provides a convenient place to do this, since it will then be called by the Application_Start() method:

        void RegisterGlobalFilters(GlobalFilterCollection filters)
        {
            filters.Add(new HandleLoggingAttribute());
            filters.Add(new HandleErrorAttribute());
        }

        void Application_Start()
        {
            RegisterGlobalFilters(GlobalFilters.Filters);
        }

    - Dynamic ViewModel Property: MVC 3 augments the ViewData API with a new "ViewModel" property on Controller which is of type "dynamic" - and therefore enables you to use the new dynamic language support in C# and VB to pass ViewData items using a cleaner syntax than the current dictionary API.

        public ActionResult Index()
        {
            ViewModel.Message = "Hello World";
            return View();
        }

    - New ActionResult Types: MVC 3 includes three new ActionResult types and helper methods:
    1. HttpNotFoundResult - indicates that a resource which was requested by the current URL was not found. HttpNotFoundResult will return a 404 HTTP status code to the calling client.
    2. PermanentRedirects - the HttpRedirectResult class contains a new Boolean "Permanent" property which is used to indicate that a permanent redirect should be done. Permanent redirects use an HTTP 301 status code. The Controller class includes three new methods for performing these permanent redirects: RedirectPermanent(), RedirectToRoutePermanent(), and RedirectToActionPermanent(). All of these methods return an instance of the HttpRedirectResult object with the Permanent property set to true.
    3. HttpStatusCodeResult - used for setting an explicit response status code and its associated description.

    MVC 3 AJAX and JavaScript Enhancements

    MVC 3 ships with built-in JSON binding support which enables action methods to receive JSON-encoded data and model-bind it to action method parameters. For example, client-side jQuery code could define a "save" event handler which will be invoked when the save button is clicked on the client. The code in the event handler constructs a client-side JavaScript "product" object with fields whose values are retrieved from HTML input elements. Finally, it uses jQuery's .ajax() method to POST a JSON-based request containing the product to a /theStore/UpdateProduct URL on the server:

        $('#save').click(function () {
            var product = {
                ProdName: $('#Name').val(),
                Price: $('#Price').val()
            };
            $.ajax({
                url: '/theStore/UpdateProduct',
                type: "POST",
                data: JSON.stringify(product),
                dataType: "json",
                contentType: "application/json; charset=utf-8",
                success: function () {
                    $('#message').html('Saved').fadeIn();
                },
                error: function () {
                    $('#message').html('Error').fadeIn();
                }
            });
            return false;
        });

    MVC will allow you to implement the /theStore/UpdateProduct URL on the server by using an action method as below. The UpdateProduct() action method will accept a strongly-typed Product object as a parameter.
    MVC 3 can now automatically bind an incoming JSON post value to the .NET Product type on the server without having to write any custom binding:

        [HttpPost]
        public ActionResult UpdateProduct(Product product)
        {
            // save logic here
            return null;
        }

    MVC 3 Model Validation Enhancements

    MVC 3 builds on the MVC 2 model validation improvements by adding support for several of the new validation features within the System.ComponentModel.DataAnnotations namespace in .NET 4.0:
    - Support for the new DataAnnotations metadata attributes like DisplayAttribute.
    - Support for the improvements made to the ValidationAttribute class, which now supports a new IsValid overload that provides more info on the current validation context, like what object is being validated.
    - Support for the new IValidatableObject interface, which enables you to perform model-level validation and also provide validation error messages which are specific to the state of the overall model.

    MVC 3 Dependency Injection Enhancements

    MVC 3 includes better support for applying Dependency Injection (DI) and also integrating with Dependency Injection/IoC containers. Currently MVC 3 Preview 1 has support for DI in the places below:
    - Controllers (registering and injecting controller factories, and injecting controllers)
    - Views (registering and injecting view engines, and injecting dependencies into view pages)
    - Action Filters (locating and injecting filters)

    Read the article

  • ASSIMP in my program is much slower to import than ASSIMP view program

    - by Marco
    The problem is really simple: if I try to load a big model (around 200,000 vertices) with the function aiImportFileExWithProperties in my software, it takes more than one minute. If I try to load the very same model with ASSIMP view, it takes 2 seconds. For this comparison, both my software and ASSIMP view are using the 64-bit DLL version of the library, compiled by myself (Assimp64.dll). This is the relevant piece of code in my software:

        // default pp steps
        unsigned int ppsteps =
            aiProcess_CalcTangentSpace         | // calculate tangents and bitangents if possible
            aiProcess_JoinIdenticalVertices    | // join identical vertices / optimize indexing
            aiProcess_ValidateDataStructure    | // perform a full validation of the loader's output
            aiProcess_ImproveCacheLocality     | // improve the cache locality of the output vertices
            aiProcess_RemoveRedundantMaterials | // remove redundant materials
            aiProcess_FindDegenerates          | // remove degenerated polygons from the import
            aiProcess_FindInvalidData          | // detect invalid model data, such as invalid normal vectors
            aiProcess_GenUVCoords              | // convert spherical, cylindrical, box and planar mapping to proper UVs
            aiProcess_TransformUVCoords        | // preprocess UV transformations (scaling, translation ...)
            aiProcess_FindInstances            | // search for instanced meshes and remove them by references to one master
            aiProcess_LimitBoneWeights         | // limit bone weights to 4 per vertex
            aiProcess_OptimizeMeshes           | // join small meshes, if possible
            aiProcess_SplitByBoneCount         | // split meshes with too many bones. Necessary for our (limited) hardware skinning shader
            0;

        cout << "Loading " << pFile << "... ";

        aiPropertyStore* props = aiCreatePropertyStore();
        aiSetImportPropertyInteger(props, AI_CONFIG_IMPORT_TER_MAKE_UVS, 1);
        aiSetImportPropertyFloat(props, AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE, 80.f);
        aiSetImportPropertyInteger(props, AI_CONFIG_PP_SBP_REMOVE, aiPrimitiveType_LINE | aiPrimitiveType_POINT);
        aiSetImportPropertyInteger(props, AI_CONFIG_GLOB_MEASURE_TIME, 1);
        //aiSetImportPropertyInteger(props, AI_CONFIG_PP_PTV_KEEP_HIERARCHY, 1);

        // Call ASSIMP's C-API to load the file
        scene = (aiScene*)aiImportFileExWithProperties(pFile.c_str(),
            ppsteps |                        /* default pp steps */
            aiProcess_GenSmoothNormals |     // generate smooth normal vectors if not existing
            aiProcess_SplitLargeMeshes |     // split large, unrenderable meshes into submeshes
            aiProcess_Triangulate |          // triangulate polygons with more than 3 edges
            //aiProcess_ConvertToLeftHanded | // convert everything to D3D left handed space
            aiProcess_SortByPType |          // make 'clean' meshes which consist of a single type of primitives
            0,
            NULL, props);
        aiReleasePropertyStore(props);

        if (!scene) {
            cout << aiGetErrorString() << endl;
            return 0;
        }

    This is the relevant piece of code in the ASSIMP view source:

        // default pp steps
        unsigned int ppsteps =
            aiProcess_CalcTangentSpace         | // calculate tangents and bitangents if possible
            aiProcess_JoinIdenticalVertices    | // join identical vertices / optimize indexing
            aiProcess_ValidateDataStructure    | // perform a full validation of the loader's output
            aiProcess_ImproveCacheLocality     | // improve the cache locality of the output vertices
            aiProcess_RemoveRedundantMaterials | // remove redundant materials
            aiProcess_FindDegenerates          | // remove degenerated polygons from the import
            aiProcess_FindInvalidData          | // detect invalid model data, such as invalid normal vectors
            aiProcess_GenUVCoords              | // convert spherical, cylindrical, box and planar mapping to proper UVs
            aiProcess_TransformUVCoords        | // preprocess UV transformations (scaling, translation ...)
            aiProcess_FindInstances            | // search for instanced meshes and remove them by references to one master
            aiProcess_LimitBoneWeights         | // limit bone weights to 4 per vertex
            aiProcess_OptimizeMeshes           | // join small meshes, if possible
            aiProcess_SplitByBoneCount         | // split meshes with too many bones. Necessary for our (limited) hardware skinning shader
            0;

        aiPropertyStore* props = aiCreatePropertyStore();
        aiSetImportPropertyInteger(props, AI_CONFIG_IMPORT_TER_MAKE_UVS, 1);
        aiSetImportPropertyFloat(props, AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE, g_smoothAngle);
        aiSetImportPropertyInteger(props, AI_CONFIG_PP_SBP_REMOVE, nopointslines ? aiPrimitiveType_LINE | aiPrimitiveType_POINT : 0);
        aiSetImportPropertyInteger(props, AI_CONFIG_GLOB_MEASURE_TIME, 1);
        //aiSetImportPropertyInteger(props, AI_CONFIG_PP_PTV_KEEP_HIERARCHY, 1);

        // Call ASSIMP's C-API to load the file
        g_pcAsset->pcScene = (aiScene*)aiImportFileExWithProperties(g_szFileName,
            ppsteps |                        /* configurable pp steps */
            aiProcess_GenSmoothNormals |     // generate smooth normal vectors if not existing
            aiProcess_SplitLargeMeshes |     // split large, unrenderable meshes into submeshes
            aiProcess_Triangulate |          // triangulate polygons with more than 3 edges
            aiProcess_ConvertToLeftHanded |  // convert everything to D3D left handed space
            aiProcess_SortByPType |          // make 'clean' meshes which consist of a single type of primitives
            0,
            NULL, props);
        aiReleasePropertyStore(props);

    As you can see, the code is nearly identical because I copied it from ASSIMP view. What could be the reason for such a difference in performance? The two programs are using the same DLL, Assimp64.dll (compiled on my computer with VC++ 2010 Express), and the same function aiImportFileExWithProperties to load the model, so I assume the actual code employed is the same. How is it possible that the function aiImportFileExWithProperties is 100 times slower when called by my software than when called by ASSIMP view? What am I missing? I am not good with DLLs and dynamic and static libraries, so I might be missing something obvious.

    UPDATE
    I found out why the code is going slower. Basically, I was running my software with "Start Debugging" in VC++ 2010 Express. If I run the code outside VC++ 2010 I get the same performance as ASSIMP view. However, now I have a new question: why does the DLL perform more slowly when run under the VC++ debugger? I compiled it in release mode without debugging information. Is there any way to make the DLL run fast in debug mode, i.e. without debugging the DLL itself? I am interested in debugging only my own code, not the DLL, which I assume is already working fine. I do not want to wait 2 minutes every time I want to load my software to debug. Does this request make sense?
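
    A plausible explanation worth verifying for this setup: when a process is launched under the Visual Studio debugger, Windows enables the debug heap for every module in the process, including release-built DLLs, and allocation-heavy importers slow down dramatically. Disabling the debug heap for the debuggee usually restores release-like speed while still letting the debugger attach to your own code:

        :: set this in the project's Debugging settings as an environment entry
        :: (or as a system environment variable before launching Visual Studio)
        _NO_DEBUG_HEAP=1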

    Read the article
