Search Results

Search found 19703 results on 789 pages for 'virtual ip'.


  • Security Access Control With Solaris Virtualization

    - by Thierry Manfe-Oracle
    Numerous Solaris customers consolidate multiple applications or servers on a single platform. The resulting configuration consists of many environments hosted on a single infrastructure, and security constraints sometimes exist between these environments. Recently, a customer consolidated many virtual machines belonging to both their Intranet and Extranet on a pair of SPARC Solaris servers interconnected through InfiniBand. Virtual machines were mapped to Solaris Zones, and one security constraint was to prevent SSH connections between the Intranet and the Extranet. This case study gives us the opportunity to understand how the Oracle Solaris Network Virtualization Technology (a.k.a. Project Crossbow) can be used to control outbound traffic from Solaris Zones.

    Solaris Zones from both the Intranet and Extranet use an InfiniBand network to access a ZFS Storage Appliance that exports NFS shares. Solaris global zones on both SPARC servers mount iSCSI LUs exported by the Storage Appliance. Non-global zones are installed on these iSCSI LUs. With no security hardening, if an Extranet zone gets compromised, the attacker could try to use the Storage Appliance as a gateway to the Intranet zones, or even worse, to the global zones, as all the zones are reachable from this node.

    One solution consists of using Solaris Network Virtualization Technology to stop outbound SSH traffic from the Solaris Zones. The virtualized network stack provides per-network-link flows. A flow classifies network traffic on a specific link. As an example, on the network link used by a Solaris Zone to connect to the InfiniBand, a flow can be created for TCP traffic on port 22, thereby creating a flow for the SSH traffic. A bandwidth can be specified for that flow and, if set to zero, the traffic is blocked. Last but not least, flows are created from the global zone, which means that even with root privileges in a Solaris Zone an attacker cannot disable or delete a flow. With the flow approach, the outbound traffic of a Solaris Zone is controlled from outside the zone. Schema 1 describes the new network setting once the security has been put in place.

    Here are the instructions to create a Crossbow flow as used in Schema 1:

        (GZ)# zoneadm -z zonename halt

    ...halts the Solaris Zone.

        (GZ)# flowadm add-flow -l iblink -a transport=TCP,remote_port=22 -p maxbw=0 sshFilter

    ...creates a flow on the IB partition "iblink" used by the zone to connect to the InfiniBand. This IB partition can be identified by intersecting the output of the commands 'zonecfg -z zonename info net' and 'dladm show-part'. The flow is created on port 22, for the TCP traffic, with a zero maximum bandwidth. The name given to the flow is "sshFilter".

        (GZ)# zoneadm -z zonename boot

    ...restarts the Solaris Zone now that the flow is in place.

    Solaris Zones and Solaris Network Virtualization enable SSH access control on InfiniBand (and on Ethernet) without the extra cost of a firewall. With this approach, no change is required on the InfiniBand switch. All the security enforcements are put in place at the Solaris level, minimizing the impact on the overall infrastructure. The Crossbow flows come in addition to many other security controls available with Oracle Solaris, such as IPFilter and Role-Based Access Control, that can be used to tackle security challenges.

    Read the article

  • Ask the Readers: How Many Monitors Do You Use with Your Computer?

    - by Asian Angel
    Most people have a single monitor for their computers, many have two, and some individuals enjoy "3 monitor plus" goodness. This week we would like to know how many monitors you use with your computer. Photo by DamnedNice.

    A good majority of people have a single monitor that they use with their computers, and that single monitor serves their needs very well. It could be that these individuals do not engage in a heavy amount of work or play on their computers…they just need to do the basics like checking e-mail, using I.M., working with photos, etc. Another possibility is the use of virtual desktop software such as Dexpot, Yodm 3D, or Sysinternals Desktops on Windows systems. Linux systems such as Ubuntu already have that wonderful multi-desktop functionality built in. The wonderful part about virtual desktops is that a single monitor can feel equivalent to a small army of monitors. The ability to separate your open windows into "categories" and spread them out across multiple desktops is definitely nice.

    With each passing year dual monitor setups are becoming more common. Having twice the screen real estate visible at the same time can be extremely convenient when you are multi-tasking. Perhaps you like to monitor your system's stats and an e-mail account on the second monitor while working with software on the first. It certainly beats having windows popping up and down on your screen constantly while keeping on top of everything!

    Next we have the people who have three or more monitors in use with their computers. This may be a result of the type of work they do, an experiment to see if multiple monitors are right for them, or the cool, geeky factor that comes with having all those monitors. Needless to say these individuals can induce a good amount of envy and/or inspiration in the rest of us when we see their awesome setups.

    Are you perfectly content with a single monitor? Do you have two or more monitors that you use? If you have two or more monitors, are they actually that useful to you? Perhaps you are getting ready even now to add additional monitors to your system. Whatever your situation may be at the moment, let us know your thoughts (and possible multi-monitor plans) in the comments!

    Read the article

  • "The connection has timed out" - Please help!

    - by gon
    I recently installed a fresh Ubuntu 12.04 LTS on a desktop, and the installation itself was successful (other than a 'grub rescue' issue that I encountered but fixed), but this connection problem is really giving me a headache. Symptoms:

    1. When I open the Firefox browser and try to connect to a website, it hangs for a while saying "Connecting..." but eventually loads an error page: "The connection has timed out".
    2. It's not a browser problem (and I tried setting the ipv6 option to "true" at about:config), because running "sudo apt-get install [some-random-package]" at the terminal fails too ("E: Unable to locate package [package]"). All other operations that need internet access are not working.
    3. I certainly see a wired network (called "eth1") in the Network Manager, and it says "Connection Established" after disconnecting and then connecting again.

    I have tried almost everything that could be found in Google search results, but still no luck. Their problems slightly differ from mine, or the solutions just don't work. By the way, the machine didn't have internet access when installing Ubuntu 12.04 (I ignored the message that I need internet to install Ubuntu). Could this be a problem? I'm sorry, I don't remember if internet worked or not on the previous version of Ubuntu. :( I would really appreciate your help... I don't even know what more to do if this fails too. Thanks!!

    Thanks for your comment. Here is the result of ifconfig:

        eth0      Link encap:Ethernet  HWaddr 78:ac:c0:3d:b2:b9
                  inet addr:10.10.65.185  Bcast:10.10.65.255  Mask:255.255.255.0
                  inet6 addr: fe80::7aac:c0ff:fe3d:b2b9/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:3907 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:771 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:393118 (393.1 KB)  TX bytes:73472 (73.4 KB)
                  Interrupt:16

        eth1      Link encap:Ethernet  HWaddr 78:ac:c0:3d:b2:b8
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:17

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:204 (204.0 B)  TX bytes:204 (204.0 B)

    route -n:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         10.10.65.1      0.0.0.0         UG    0      0        0 eth0
        10.10.65.0      0.0.0.0         255.255.255.0   U     1      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0

    /etc/resolv.conf:

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 8.8.8.8
        nameserver 8.8.4.4
        nameserver 10.81.1.8
        nameserver 10.1.2.10
        nameserver 127.0.0.1
        search yamatake.local

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback
        #auto eth0
        #iface eth0 inet dhcp
        #auto eth1
        #iface eth1 inet dhcp

    And I'll also include the result of 'sudo lshw -C network' in case it might help:

        *-network
             description: Ethernet interface
             product: NetXtreme BCM5764M Gigabit Ethernet PCIe
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: eth0
             version: 10
             serial: 78:ac:c0:3d:b2:b9
             size: 100Mbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5764m-v3.35 ip=10.10.65.185 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
             resources: irq:93 memory:fc000000-fc00ffff
        *-network
             description: Ethernet interface
             product: NetXtreme BCM5764M Gigabit Ethernet PCIe
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             logical name: eth1
             version: 10
             serial: 78:ac:c0:3d:b2:b8
             size: 100Mbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5764m-v3.35 latency=0 link=no multicast=yes port=twisted pair speed=100Mbit/s
             resources: irq:94 memory:fb000000-fb00ffff

    Read the article

  • Obfuscation is not a panacea

    - by simonc
    So, you want to obfuscate your .NET application. My question to you is: why? What are your aims when you obfuscate your application? To protect your IP and algorithms? To prevent crackers from breaking your licensing? Your boss says you need to? To give you a warm fuzzy feeling inside? Obfuscating code correctly can be tricky; it can break your app if applied incorrectly, and it can cause problems down the line. Let me be clear - there are some very good reasons why you would want to obfuscate your .NET application. However, you shouldn't be obfuscating for the sake of obfuscating.

    Security through obfuscation? Once your application has been installed on a user's computer, you no longer control it. If they do not want to pay for your application, then nothing can stop them from cracking it, even if the time cost to them is much greater than the cost of actually paying for it. Some people will not pay for software, even if it takes them a month to crack a $30 app. And once it is cracked, there is nothing stopping them from putting the result up on the internet. There should be nothing surprising about this; there is no software protection available for general-purpose computers that cannot be cracked by a sufficiently determined attacker. Only by completely controlling the entire stack (software, hardware, and the internet connection) can you have even a chance to be uncrackable. And even then, someone somewhere will still have a go, and probably succeed. Even high-end cryptoprocessors have known vulnerabilities that can be exploited by someone with a scanning electron microscope and lots of free time.

    So, then, why use obfuscation? Well, the primary reason is to protect your IP. What obfuscation is very good at is hiding the overall structure of your program, so that it's very hard to figure out what exactly the code is doing at any one time, what context it is running in, and how it fits in with the rest of the application; all of which you need to do to understand how the application operates. This is completely different to cracking an application, where you simply have to find a single toggle that determines whether the application is licensed or not, and flip it without the rest of the application noticing.

    However, again, there are limitations. An obfuscated application still has to run in the same way, and do the same thing, as the original unobfuscated application. This means that some of the protections applied to the obfuscated assembly have to be undone at runtime, or else it would not run on the CLR and do the same thing. And, again, since we don't control the environment the application is run on, there is nothing stopping a user from undoing those protections manually and reversing some of the obfuscation. It's a perpetual arms race, and it always will be. We have plenty of ideas lined up for new protections, and the new protections added in SA 6.6 (method parent obfuscation and a new control flow obfuscation level) are specifically designed to be harder to reverse and reconstruct the original structure.

    So then, by all means, obfuscate your application if you want to protect the algorithms and what the application does. That's what SmartAssembly is designed to do. But make sure you are clear about what a .NET obfuscator can and cannot protect you against, and don't expect your obfuscated application to be uncrackable. Someone, somewhere, will crack your application if they want to and they don't have anything better to do with their time. The best we can do is dissuade the casual crackers and make it much more difficult for the serious ones. Cross posted from Simple Talk.

    Read the article

  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code are available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com

    Extensions and add-ins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions, and the list of 3rd-party extensions that plug into Visual Studio just keeps growing. One of the nice things about VSIX extensions is how they are deployed: Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when you start Visual Studio, and installing/updating the extensions is fast, since it is only a matter of extracting the files within the VSIX package to the local extension folder.

    But for custom, enterprise-specific extensions, you don't want to publish them online to the whole world, yet you still want an easy way to distribute them to your developers and partners. This is where private extension galleries come into play. In Visual Studio 2012, it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content, of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery. Visual Studio galleries use Atom Feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of the extension. Wouldn't it be nice to have a web service that takes care of this for you, that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom Feed XML? Well, search no more, this is exactly what the Inmeta Visual Studio Gallery Service does for you :-) Here you can see that, in addition to the standard Online galleries, there is an Inmeta gallery that contains two extensions (our WiX templates and our custom TFS check-in policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery.

    Installing the Service
    1. Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward: just select the web site, application pool, and (optionally) a virtual directory where you want to install the service. Note: if you want to run it in the web site root, just leave the application name blank.
    2. Press Next and finish the installer.
    3. Open web.config in a text editor and locate the <applicationSettings> element. Edit the following setting values:
       - FeedTitle: the name that is shown if you browse to the service using a browser. Not used by Visual Studio.
       - BaseURI: when Visual Studio downloads the extension, it will be given this URI plus the name of the extension that you selected. This value should be in the following format: http://SERVER/[VDIR]/gallery/extension/
       - VSIXAbsolutePath: the path where you will deploy your extensions. This can be a local folder or a remote share; you just need to make sure that the application pool identity account has read permissions on this folder.
    4. Save web.config to finish the installation.
    5. Open a browser and enter the URL of the service. It should show an empty feed page. A sketch of what the edited section can look like follows at the end of this entry.

    Adding the Private Gallery in Visual Studio 2012
    Now you need to add the gallery in Visual Studio. This is very easy and is done as follows:
    1. Go to Tools -> Options and select Environment -> Extensions and Updates.
    2. Press Add to add a new gallery.
    3. Enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step.
    4. Press OK to save the settings.

    Deploying an Extension
    This one is easy: just drop the file in the designated folder! :-) If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio Gallery. I hope that you will find this service useful; please contact me if you have questions or suggestions for improvements!
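    For orientation, the <applicationSettings> block being edited in step 3 will look roughly like this. This is a hedged sketch: the settings-group name and the example values below are illustrative placeholders, since the exact names depend on how the service assembly was built; only FeedTitle, BaseURI, and VSIXAbsolutePath are taken from the instructions above.

        <applicationSettings>
          <!-- the settings-group name below is an illustrative placeholder -->
          <Inmeta.VSGalleryService.Properties.Settings>
            <setting name="FeedTitle" serializeAs="String">
              <value>Inmeta Gallery</value>
            </setting>
            <setting name="BaseURI" serializeAs="String">
              <value>http://myserver/vsgallery/gallery/extension/</value>
            </setting>
            <setting name="VSIXAbsolutePath" serializeAs="String">
              <value>D:\VSGallery\Extensions</value>
            </setting>
          </Inmeta.VSGalleryService.Properties.Settings>
        </applicationSettings>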

    Read the article

  • DirectX particle system. ConstantBuffer

    - by Liuka
    I'm new to DirectX and I'm making a 2D game. I want to use a particle system to simulate a 3D starfield, so each star has to set its own constant buffer for the vertex shader, e.g. to set its world matrix. So if I have 500 stars (that move every frame) I need to call VSSetConstantBuffers 500 times, and map/unmap each buffer. With 500 stars I get an average of 220 fps, and that's quite good. My bottleneck is VS/PSSetConstantBuffers: if I don't call these functions I get 400 fps (obviously nothing is displayed, since I don't set the positions of the stars). So is there a method to speed up the rendering of the particle system? P.S. I'm using Intel integrated graphics (HD 2000-3000). With an NVIDIA (or AMD) GPU will I have the same bottleneck? If, for example, I don't call SetShaderResources I get 10-20 fps more (for 500 objects), which is not 180. Why does SetConstantBuffers take so long?

        LPVOID VSdataPtr = VSmappedResource.pData;
        memcpy(VSdataPtr, VSdata, CszVSdata);
        context->Unmap(VertexBuffer, 0);

        result = context->Map(PixelBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &PSmappedResource);
        if (FAILED(result))
        {
            outputResult.OutputErrorMessage(TITLE, L"Cannot map the PixelBuffer", &result, OUTPUT_ERROR_FILE);
            return;
        }
        LPVOID PSdataPtr = PSmappedResource.pData;
        memcpy(PSdataPtr, PSdata, CszPSdata);
        context->Unmap(PixelBuffer, 0);

        context->VSSetConstantBuffers(0, 1, &VertexBuffer);
        context->PSSetConstantBuffers(0, 1, &PixelBuffer);

    This updates and sets the buffers. It's part of the render method of a sprite class that also contains the vertex buffer and the texture to apply to the quads (it's a 2D game). I have an array of 500 stars (sprites set up with a star texture). Every frame: clear the back buffer; draw the array of stars; present the back buffer. Draw also calls the function update(), which calculates the position of the sprite on screen based on a "camera" class.

    OK, creating a vertex buffer with the vertices of each quad (star) seems good, since the stars don't change their "virtual" position. So... in a particle system (where particles move), is it better to have all the objects in only one vertex array, rather than an array of different sprites/objects, in order to update all the vertices' positions with a single set-buffer call? In that case I have to use a dynamic vertex buffer with the vertex positions, like this:

        verticesForQuad = {
            { XMFLOAT3( (float)halfDimensions.x - 1 + pos.x,  (float)halfDimensions.y - 1 + pos.y, 1.0f), XMFLOAT2(1.0f, 0.0f) },
            { XMFLOAT3( (float)halfDimensions.x - 1 + pos.x, -(float)halfDimensions.y - 1 + pos.y, 1.0f), XMFLOAT2(1.0f, 1.0f) },
            { XMFLOAT3(-(float)halfDimensions.x - 1 + pos.x,  (float)halfDimensions.y - 1 + pos.y, 1.0f), XMFLOAT2(0.0f, 0.0f) },
            { XMFLOAT3(-(float)halfDimensions.x - 1 + pos.x, -(float)halfDimensions.y - 1 + pos.y, 1.0f), XMFLOAT2(0.0f, 1.0f) },
            // ...other quads
        };

    where halfDimensions is the half-size in pixels of a texture and pos the virtual position of a star. Then create an array of verticesForQuad and create the vertex buffer:

        ZeroMemory(&vertexDesc, sizeof(vertexDesc));
        vertexDesc.Usage = D3D11_USAGE_DEFAULT;
        vertexDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        vertexDesc.ByteWidth = sizeof(VertexType) * 4 * numStars;
        ZeroMemory(&resourceData, sizeof(resourceData));
        resourceData.pSysMem = verticesForQuad;
        result = device->CreateBuffer(&vertexDesc, &resourceData, &CvertexBuffer);

    and call each frame:

        context->IASetVertexBuffers(0, 1, &CvertexBuffer, &stride, &offset);

    But if I want to add and remove objects, I have to recreate the buffer each time, don't I? Is there a faster way? I think I could create a vertex buffer with a maximum size (e.g. 10000 objects) and, when I update it, set only the positions in use (say 250, for 250 objects) and pass that count as the vertex count to the draw function (numObjs * 4). Or am I wrong?

    Read the article

  • Ubuntu 12.10 no network and no graphics

    - by khasiKoMasu
    I recently upgraded Ubuntu 12.04 to 12.10, only to find out that it won't connect to any network, neither wired nor wireless, and the graphics are messed up too, as in a low screen resolution. On 12.04, my system was running perfectly. I don't know why the upgrade messed it up so badly. Reinstalling the OS is an issue because I have set up a lot of development environments that I cannot afford to set up again. Some of the outputs:

        lspci -nn | grep 0200
        02:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 02)

        nm-tool
        NetworkManager Tool
        State: disconnected

        cat /etc/network/interfaces
        auto lo
        iface lo inet loopback

        sudo cat /var/log/syslog | grep etwork | tail -n20
        Nov 2 13:50:22 Cobalt NetworkManager[978]: SCPlugin-Ifupdown: (-1240454760) ... get_connections (managed=false): return empty list.
        Nov 2 13:50:22 Cobalt NetworkManager[978]: Ifupdown: get unmanaged devices count: 0
        Nov 2 13:50:22 Cobalt bluetoothd[1016]: Failed to init network plugin
        Nov 2 13:50:22 Cobalt NetworkManager[978]: <info> modem-manager is now available
        Nov 2 13:50:22 Cobalt NetworkManager[978]: <info> monitoring kernel firmware directory '/lib/firmware'.
        Nov 2 13:50:22 Cobalt NetworkManager[978]: <info> WiFi enabled by radio killswitch; enabled by state file
        Nov 2 13:50:22 Cobalt NetworkManager[978]: <info> WWAN enabled by radio killswitch; enabled by state file
        Nov 2 13:50:22 Cobalt NetworkManager[978]: <info> WiMAX enabled by radio killswitch; enabled by state file
        Nov 2 13:50:22 Cobalt NetworkManager[978]: <info> Networking is enabled by state file
        Nov 2 13:50:22 Cobalt NetworkManager[978]: <warn> /sys/devices/virtual/net/lo: couldn't determine device driver; ignoring...
        Nov 2 13:50:22 Cobalt NetworkManager[978]: <warn> /sys/devices/virtual/net/lo: couldn't determine device driver; ignoring...
        Nov 2 13:50:22 Cobalt NetworkManager[978]: <warn> bluez error getting default adapter: Message did not receive a reply (timeout by message bus)
        Nov 2 13:50:22 Cobalt NetworkManager[978]: <warn> bluez error getting default adapter: Message did not receive a reply (timeout by message bus)
        Nov 2 13:50:22 Cobalt kernel: [ 28.688167] type=1400 audit(1351882222.452:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=1046 comm="apparmor_parser"
        Nov 2 13:50:22 Cobalt bluetoothd[1062]: Failed to init network plugin
        Nov 2 13:50:22 Cobalt NetworkManager[978]: <warn> bluez error getting default adapter: Message did not receive a reply (timeout by message bus)
        Nov 2 13:50:22 Cobalt bluetoothd[1118]: Failed to init network plugin
        Nov 2 13:50:22 Cobalt NetworkManager[978]: <warn> bluez error getting default adapter: Message did not receive a reply (timeout by message bus)
        Nov 2 13:50:22 Cobalt bluetoothd[1237]: Failed to init network plugin
        Nov 2 13:50:22 Cobalt NetworkManager[978]: <warn> bluez error getting default adapter: Message did not receive a reply (timeout by message bus)

        ps aux | grep -i network
        root 978 0.0 0.1 23732 4808 ? Ssl 13:50 0:00 NetworkManager

        sudo modprobe -r forcedeth
        FATAL: Module forcedeth not found

    Read the article

  • 11gR2 RAC: How ASM Locates Its SPFILE

    - by Liu Maclean
    In 11gR2 RAC, the OCR and voting disks are commonly stored in ASM, and unlike the 10g two-node RAC configurations many readers are used to, the ASM spfile itself now lives inside an ASM diskgroup. That raises an obvious chicken-and-egg question: ASM needs its parameter file to start, but the parameter file sits in a diskgroup that is only available once ASM is up and has mounted it. A reader on t.askmaclean.com asked exactly this (translated from Chinese):

    "Hello Maclean, spget in ASMCMD shows my spfile at +CRSDG/rac/asmparameterfile/registry.253.787925627. The ASM instance is itself an ORACLE instance, so before any diskgroup is mounted, how does it read its own parameter file? Thanks!"

    Here is the explanation. In 11.2, Oracle Clusterware handles voting disk files differently from 11.1 and 10.2: in 11.2 the voting disk files can be stored in ASM, and CSS locates them through the voting file discovery string recorded in the GPnP profile. On this cluster the ASM LUNs are udev devices named /dev/rasm-disk*, and gpnptool get dumps the GPnP profile:

        [grid@maclean1 trace]$ gpnptool get
        Warning: some command line parameters were defaulted. Resulting command line: /g01/grid/app/11.2.0/grid/bin/gpnptool.bin get -o-
        <?xml version="1.0" encoding="UTF-8"?>
        <gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile"
          xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile"
          xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd"
          ProfileSequence="9" ClusterUId="452185be9cd14ff4ffdc7688ec5439bf"
          ClusterName="maclean-cluster" PALocation="">
          <gpnp:Network-Profile>
            <gpnp:HostNetwork id="gen" HostName="*">
              <gpnp:Network id="net1" IP="192.168.1.0" Adapter="eth0" Use="public"/>
              <gpnp:Network id="net2" IP="172.168.1.0" Adapter="eth1" Use="cluster_interconnect"/>
            </gpnp:HostNetwork>
          </gpnp:Network-Profile>
          <orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/>
          <orcl:ASM-Profile id="asm" DiscoveryString="/dev/rasm*"
            SPFile="+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933"/>
          <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"><InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>L1SLg10AqGEauCQ4ne9quucITZA=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>rTyZm9vfcQCMuian6isnAThUmsV4xPoK2fteMc1l0GIvRvHncMwLQzPM/QrXCGGTCEvgvXzUPEKzmdX2oy5vLcztN60UHr6AJtA2JYYodmrsFwEyVBQ1D6wH+HQiOe2SG9UzdQnNtWSbjD4jfZkeQWyMPfWdKm071Ek0Rfb4nxE=</ds:SignatureValue></ds:Signature>
        </gpnp:GPnP-Profile>
        Success.

    Two entries matter here:

    <orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/> says the CSS voting files are discovered through ASM.
    <orcl:ASM-Profile id="asm" DiscoveryString="/dev/rasm*" SPFile="+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933"/> says the ASM disk discovery string is /dev/rasm*, and SPFile records the ASM alias of the ASM parameter file. Because the alias is kept in the GPnP profile, ASM knows where its spfile lives without consulting anything stored inside a mounted diskgroup.

    We can verify which allocation units (AUs) actually hold the spfile +SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933:

        [grid@maclean1 wallets]$ sqlplus / as sysasm

        SQL*Plus: Release 11.2.0.3.0 Production on Tue Jul 17 05:45:35 2012
        Copyright (c) 1982, 2011, Oracle. All rights reserved.
        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
        With the Real Application Clusters and Automatic Storage Management options

        SQL> set linesize 140 pagesize 1400
        SQL> col "FILE NAME" format a40
        SQL> select NAME "FILE NAME", AU_KFFXP "AU NUMBER", NUMBER_KFFXP "FILE NUMBER", DISK_KFFXP "DISK NUMBER"
          2  from x$kffxp, v$asm_alias
          3  where GROUP_KFFXP = GROUP_NUMBER and NUMBER_KFFXP = FILE_NUMBER
          4  and name in ('REGISTRY.253.788682933') order by DISK_KFFXP, AU_KFFXP;

        FILE NAME                                 AU NUMBER FILE NUMBER DISK NUMBER
        ---------------------------------------- ---------- ----------- -----------
        REGISTRY.253.788682933                           39         253           1
        REGISTRY.253.788682933                           35         253           3
        REGISTRY.253.788682933                           35         253           4

        SQL> col path for a50
        SQL> select disk_number, path from v$asm_disk where disk_number in (1,3,4) and GROUP_NUMBER=3;

        DISK_NUMBER PATH
        ----------- --------------------------------------------------
                  3 /dev/rasm-diske
                  4 /dev/rasm-diskf
                  1 /dev/rasm-diskc

    The spfile is triple-mirrored (high redundancy): AU 39 on /dev/rasm-diskc, and AU 35 on both /dev/rasm-diske and /dev/rasm-diskf. Now read the headers of those disks with kfed:

        [grid@maclean1 wallets]$ kfed read /dev/rasm-diske | grep spfile
        kfdhdb.spfile:                       35 ; 0x0f4: 0x00000023
        [grid@maclean1 wallets]$ kfed read /dev/rasm-diskc | grep spfile
        kfdhdb.spfile:                       39 ; 0x0f4: 0x00000027
        [grid@maclean1 wallets]$ kfed read /dev/rasm-diskf | grep spfile
        kfdhdb.spfile:                       35 ; 0x0f4: 0x00000023

    The kfdhdb.spfile field in each ASM disk header records the AU number where that disk's copy of the spfile starts. This closes the loop: at startup, ASM scans the disks matching the DiscoveryString from the GPnP profile, reads their headers, and uses kfdhdb.spfile to read the parameter file directly from those AUs. No diskgroup mount is required, which is how an ASM instance whose spfile is stored inside ASM manages to start at all.

    Read the article

  • 'EditItem' is not allowed for this view - databinding issue

    - by Pawan
    Hi, I am trying to do data binding in WPF on a data grid using a custom list. My custom list class contains a private data list of type List<T>. I cannot expose this list; however, indexers are exposed for setting and getting individual items. My custom class looks like this:

        public abstract class TestElementList<T> : IEnumerable where T : class
        {
            protected List<T> Data { get; set; }

            public virtual T Get(int index)
            {
                T item = Data[index];
                return item;
            }

            public virtual void Set(int index, T item)
            {
                Data[index] = item;
            }
            ...
        }

    The data is bound, but when I try to edit it, I get "'EditItem' is not allowed for this view." After doing extensive searching on the web, I found that I might need to implement the IEditableCollectionView interface as well. Can anybody please help me, either with pointers on how to implement this interface, or by suggesting any other better way to do data binding on a custom list? Thanks in advance.
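    One commonly cited fix, sketched here under the assumption that the backing collection type can be changed: WPF only creates a ListCollectionView (which implements IEditableCollectionView and supports EditItem) when the bound source implements the non-generic IList; a source that is merely IEnumerable gets wrapped in a view that refuses edits. Exposing the data through an ObservableCollection<T> is usually enough; the Items property name below is illustrative:

        using System.Collections.ObjectModel;

        // Minimal sketch: ObservableCollection<T> implements IList, so WPF
        // wraps it in a ListCollectionView and EditItem starts working.
        public abstract class TestElementList<T> where T : class
        {
            protected ObservableCollection<T> Data { get; set; }

            // Bind the grid to this, e.g. ItemsSource="{Binding Items}".
            public ObservableCollection<T> Items { get { return Data; } }

            public virtual T Get(int index) { return Data[index]; }
            public virtual void Set(int index, T item) { Data[index] = item; }
        }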

    Read the article

  • PHP Codeigniter error: call to undefined method ci_db_mysql_driver::result()

    - by Ronnie
    I was trying to create an XML response using CodeIgniter. The following error gets thrown when I run the code:

        This page contains the following errors: error on line 1 at column 48: Extra content at the end of the document

        <?php
        class Api extends CI_Controller {

            function index()
            {
                $this->load->helper('url', 'xml', 'security');
                echo '<em>oops! no parameters selected.</em>';
            }

            function authorize($email = 'blank', $password = 'blank')
            {
                header("content-type: text/xml");
                echo '<?xml version="1.0" encoding="ISO-8859-1"?>';
                echo '<node>';
                if ($email == 'blank' AND $password == 'blank') {
                    echo '<response>failed</response>';
                } else {
                    $this->db->where('email_id', $email);
                    $this->db->limit(1);
                    $query = $this->db->from('lp_user_master');
                    $this->get();
                    $count = $this->db->count_all_results();
                    if ($count > 0) {
                        foreach ($query->result() as $row) {
                            echo '<ip>'.$row->title.'</ip>';
                        }
                    }
                }
                echo '</node>';
            }
        }
        ?>

    Read the article

  • Struggling to run ASP.NET MVC2 on IIS 6.0

    - by Luke
    Hi, I could use a little help using MVC2 on IIS 6.0. It's MVC2 RC2 [.NET 3.5]. I followed the famous Haacked tutorial: created a virtual folder, created a Default.aspx website for my project, and put everything into my virtual folder. The routing is modified to use wildcard mapping [anyway, it's not running without it either], according to the tutorial. I also checked that all web service extensions are allowed [ASP.NET 2.0 / ASP.NET 4.0 / Active Server Pages]. The routing works fine on my development machine; I even checked it using Haack's routing debugger [http://haacked.com/archive/2008/03/13/url-routing-debugger.aspx]. Everything seems fine so far, but I get only 404 - Not Found errors. Is there something I might be missing?

    Global.asax.cs:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "Default",
                "{controller}.mvc/{action}/{id}",
                new { action = "Index", id = "" }
            );

            routes.MapRoute(
                "Root",
                "",
                new { controller = "Home", action = "Index", id = "" }
            );
        }

    Default.aspx.cs:

        public void Page_Load(object sender, System.EventArgs e)
        {
            // Change the current path so that the Routing handler can correctly interpret
            // the request, then restore the original path so that the OutputCache module
            // can correctly process the response (if caching is enabled).
            string originalPath = Request.Path;
            HttpContext.Current.RewritePath(Request.ApplicationPath, false);
            IHttpHandler httpHandler = new MvcHttpHandler();
            httpHandler.ProcessRequest(HttpContext.Current);
            HttpContext.Current.RewritePath(originalPath, false);
        }

    Read the article

  • NHibernate exception - No persister for

    - by Muhammad Akhtar
    I am getting an exception when calling a stored procedure using NHibernate. Here is the exception:

        No persister for: ReleaseDAL.ProgressBars, ReleaseDAL, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null

    Here is the class file:

        public class ProgressBars
        {
            public ProgressBars() { }

            private Int32 _Tot;
            private Int32 _subtot;

            public virtual Int32 Tot    { get { return _Tot; }    set { _Tot = value; } }
            public virtual Int32 subtot { get { return _subtot; } set { _subtot = value; } }
        }

    Here is my mapping file:

        <hibernate-mapping xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                           xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                           xmlns="urn:nhibernate-mapping-2.2"
                           assembly="ReleaseDAL" namespace="ReleaseDAL">
          <sql-query name="ps_getProgressBarData1">
            <return alias="ProgressBars" class="ProgressBars">
              <return-property name="Tot" column="Tot"/>
              <return-property name="subtot" column="subtot"/>
            </return>
            exec ps_getProgressBarData1
          </sql-query>
        </hibernate-mapping>

    Calling code:

        public List<ProgressBars> getProgressBarData1(Guid ReleaseId)
        {
            ISession session = NHibernateHelper.GetCurrentSession();
            List<ProgressBars> progressBar = (List<ProgressBars>)session.GetNamedQuery("ps_getProgressBarData1").List<ProgressBars>();
            NHibernateHelper.CloseSession();
            return progressBar;
        }

    I am in trouble because of this exception; I did a lot of Googling but had no success. Please give me an idea how to solve this problem. Thanks.
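    The usual cause of "No persister for" in this situation is that ProgressBars is only referenced from the named query: a <return class="..."> element must name a class that has its own entity mapping, and there is none here. One hedged workaround that sidesteps entity mapping entirely is to run the procedure as a plain SQL query and project the columns onto the DTO with the AliasToBean transformer (a sketch, adapted to the names in the question):

        using System.Collections.Generic;
        using NHibernate;
        using NHibernate.Transform;

        public IList<ProgressBars> GetProgressBarData1()
        {
            ISession session = NHibernateHelper.GetCurrentSession();
            try
            {
                // AliasToBean copies each result column (Tot, subtot) onto the
                // property with the same name, so ProgressBars needs no mapping.
                return session.CreateSQLQuery("exec ps_getProgressBarData1")
                              .SetResultTransformer(Transformers.AliasToBean<ProgressBars>())
                              .List<ProgressBars>();
            }
            finally
            {
                NHibernateHelper.CloseSession();
            }
        }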

    Read the article

  • Can't insert row with auto-increment key via FluentNHibernate

    - by Jeff Shattock
    I'm getting started with Fluent NHibernate, and NHibernate in general. I'm trying to do something that I feel is pretty basic, but I can't quite get it to work: adding a new entry to a simple table. Here's the entity class:

        public class Product
        {
            public Product() { id = 0; }

            public virtual int id { get; set; }
            public virtual string description { get; set; }
        }

    Here's its mapping:

        public class ProductMap : ClassMap<Product>
        {
            public ProductMap()
            {
                Id(p => p.id).GeneratedBy.Identity().UnsavedValue(0);
                Map(p => p.description);
            }
        }

    I've tried that with and without the additional calls after Id(). And the insert code:

        var p = new Product() { description = "Apples" };
        using (var s = _sf.CreateSession())
        {
            s.Save(p);
            s.Flush();
        }

    where _sf is a properly configured SessionSource. When I execute this code, I get:

        NHibernate.AssertionFailure : null identifier

    which makes sense based on the SQL that NHibernate is executing:

        INSERT INTO "Product" (description) VALUES (@p0);@p0 = 'Apples'

    It doesn't seem to be trying to set the Id field, which seems OK (on its face) since the DB should generate that. But it's not, I think. The DB schema is autogenerated by FNH:

        var config = Fluently.Configure().Database(MsSqlCeConfiguration.Standard.ShowSql().ConnectionString(@"Data Source=Database1.sdf"));
        var SessionSource = new SessionSource(config.BuildConfiguration().Properties, new ModelMappings());
        var Session = SessionSource.CreateSession();
        SessionSource.BuildSchema(Session);
        CreateInitialData(Session);
        Session.Flush();
        Session.Clear();

    I'm sure I'm doing tons of things wrong, but what's the one that's causing this error?
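    Before digging deeper, one hedged suggestion worth ruling out (a sketch, not a confirmed fix for this exact error): NHibernate is designed to run writes inside an explicit transaction, and with an Identity generator the generated key is read back as part of the insert on the same connection, so the canonical save looks like this:

        using (var s = _sf.CreateSession())
        using (var tx = s.BeginTransaction())
        {
            var p = new Product { description = "Apples" };

            // With GeneratedBy.Identity(), Save() issues the INSERT immediately
            // and populates p.id from the database-generated value.
            s.Save(p);
            tx.Commit();
        }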

    Read the article

  • MVC2: Validate PartialView before Form Submit of Page containing Partial View

    - by Pascal
    I am using ASP.NET MVC2 and have a basic page that includes a partial view within a form:

        <% using (Html.BeginForm()) { %>
            <% Html.RenderAction("partialViewActionName", "Controllername"); %>
            <input type="submit" value="Weiter" />
        <% } %>

    When I submit the form, the HttpPost action of my page is called, and AFTER that the HttpPost action of my partial view is called:

        [HttpPost]
        public virtual ActionResult PagePostMethod(myModel model)
        {
            // here I should know about the validation of my partial view
            // If partialView.ModelState is valid then
            //     return View("success");
            // else return View(model)
        }

        [HttpPost]
        public virtual ActionResult partialViewActionName(myModel model)
        {
            ModelState.AddModelError("Error");
            return View(model);
        }

    But as I am doing the validation in the HttpPost method of my partial view (because I want to use the partial view in several places), I can't decide whether my whole page is valid or not. Does anyone have an idea how I could do this? Isn't it a common task to have several partial views on a page, but have the information about validation in the page action methods? Thanks very much for your help!
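    One pattern worth trying, sketched under the assumption that the partial's inputs post back as part of the page's form: every posted field binds into the page action's model, so any binding or validation failure already lands in the page action's ModelState, and the page can decide success or failure without asking the partial at all:

        [HttpPost]
        public virtual ActionResult PagePostMethod(myModel model)
        {
            // Fields rendered by the partial are posted with the page's form,
            // so they bind here and any errors appear in this ModelState.
            if (ModelState.IsValid)
                return View("success");

            return View(model);
        }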

    Read the article

  • How to analyze 'dbcc memorystatus' result in SQL Server 2008

    - by envykok
    Currently I am facing a SQL memory pressure issue. I have run 'dbcc memorystatus'; here is part of the result:

        Memory Manager                      KB
        VM Reserved                         23617160
        VM Committed                        14818444
        Locked Pages Allocated              0
        Reserved Memory                     1024
        Reserved Memory In Use              0

        Memory node Id = 0                  KB
        VM Reserved                         23613512
        VM Committed                        14814908
        Locked Pages Allocated              0
        MultiPage Allocator                 387400
        SinglePage Allocator                3265000

        MEMORYCLERK_SQLBUFFERPOOL (node 0)  KB
        VM Reserved                         16809984
        VM Committed                        14184208
        Locked Pages Allocated              0
        SM Reserved                         0
        SM Committed                        0
        SinglePage Allocator                0
        MultiPage Allocator                 408

        MEMORYCLERK_SQLCLR (node 0)         KB
        VM Reserved                         6311612
        VM Committed                        141616
        Locked Pages Allocated              0
        SM Reserved                         0
        SM Committed                        0
        SinglePage Allocator                1456
        MultiPage Allocator                 20144

        CACHESTORE_SQLCP (node 0)           KB
        VM Reserved                         0
        VM Committed                        0
        Locked Pages Allocated              0
        SM Reserved                         0
        SM Committed                        0
        SinglePage Allocator                3101784
        MultiPage Allocator                 300328

        Buffer Pool                         Value
        Committed                           1742946
        Target                              1742946
        Database                            1333883
        Dirty                               940
        In IO                               1
        Latched                             18
        Free                                89
        Stolen                              408974
        Reserved                            2080
        Visible                             1742946
        Stolen Potential                    1579938
        Limiting Factor                     13
        Last OOM Factor                     0
        Page Life Expectancy                5463

        Process/System Counts               Value
        Available Physical Memory           258572288
        Available Virtual Memory            8771398631424
        Available Paging File               16030617600
        Working Set                         15225597952
        Percent of Committed Memory in WS   100
        Page Faults                         305556823
        System physical memory high         1
        System physical memory low          0
        Process physical memory low         0
        Process virtual memory low          0

        Procedure Cache                     Value
        TotalProcs                          11382
        TotalPages                          430160
        InUsePages                          28

    Can you help me analyze this result? Is it that a lot of execution plans have been cached, causing the memory issue, or is there another reason?

    Read the article

  • How to Impersonate a user for a file copy over the network when dns or netbios is not available

    - by Scott Chamberlain
    I have ComputerA on DomainA, running as UserA, needing to copy a very large file to ComputerB on WorkgroupB, which has the IP 192.168.10.2, into a Windows share that only UserB has write access to. There is no NetBIOS or DNS resolution, so the computer must be referenced by IP. First I tried:

        AppDomain.CurrentDomain.SetPrincipalPolicy(System.Security.Principal.PrincipalPolicy.WindowsPrincipal);
        WindowsIdentity UserB = new WindowsIdentity("192.168.10.2\\UserB", "PasswordB"); // exception
        WindowsImpersonationContext contex = UserB.Impersonate();
        File.Copy(@"d:\bigfile", @"\\192.168.10.2\bigfile");
        contex.Undo();

    but I get a System.Security.SecurityException: "The name provided is not a properly formed account name." So I tried:

        AppDomain.CurrentDomain.SetPrincipalPolicy(System.Security.Principal.PrincipalPolicy.WindowsPrincipal);
        WindowsIdentity webinfinty = new WindowsIdentity("ComputerB\\UserB", "PasswordB"); // exception

    but I get "Logon failure: unknown user name or bad password." instead. So then I tried:

        IntPtr token;
        bool succeded = LogonUser("UserB", "192.168.10.2", "PasswordB", LogonTypes.Network, LogonProviders.Default, out token);
        if (!succeded)
        {
            throw new Win32Exception(Marshal.GetLastWin32Error());
        }
        WindowsImpersonationContext contex = WindowsIdentity.Impersonate(token);
        (...)

        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool LogonUser(
            string principal,
            string authority,
            string password,
            LogonTypes logonType,
            LogonProviders logonProvider,
            out IntPtr token);

    but LogonUser returns false with the Win32 error "Logon failure: unknown user name or bad password". I know my username and password are fine; I have logged on to ComputerB as that user. Any recommendations?
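    A hedged sketch of the approach that usually works for cross-machine copies without a trust relationship: call LogonUser with LOGON32_LOGON_NEW_CREDENTIALS (value 9) and LOGON32_PROVIDER_WINNT50 (value 3). This logon type does not validate the credentials locally, so the untrusted workgroup account is not a problem; the credentials are only presented when the remote machine is actually accessed. The share path below is illustrative, and the P/Invoke signature needs int (or matching enum members) for the two logon parameters:

        const int LOGON32_LOGON_NEW_CREDENTIALS = 9;
        const int LOGON32_PROVIDER_WINNT50 = 3;

        IntPtr token;
        bool ok = LogonUser("UserB", "192.168.10.2", "PasswordB",
                            LOGON32_LOGON_NEW_CREDENTIALS,
                            LOGON32_PROVIDER_WINNT50, out token);
        if (!ok)
            throw new Win32Exception(Marshal.GetLastWin32Error());

        using (WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(token))
        {
            // The copy now presents UserB's credentials to the remote machine.
            File.Copy(@"d:\bigfile", @"\\192.168.10.2\share\bigfile", true);
        }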

    Read the article

  • SSRS 2008 Report Manager Error

    - by Nick
    I have just installed SQL Server 2008, including Reporting Services, on Windows Server 2003. I'm having a problem, though, accessing the Report Manager. When the Reporting Services service is first started I can access it fine, but after maybe an hour, when I try to access it, I get an error saying: "Unable to connect to the remote server". The Reporting Services service is still running at this point. I can connect to it through Reporting Services Configuration Manager, and clicking on the Web Service URL gives a directory listing (I assume that is correct behaviour). If I stop and start the service through Reporting Services Configuration Manager, then I can access Report Manager once again (although in maybe an hour I will get the same error again). I've installed the latest SP1 service pack. I'm using the same domain account to run all the SQL services. The report server is set to use the default ReportServer virtual directory, IP address All Assigned, TCP port 80, and no SSL certificate. The Report Manager is set to use the default Reports virtual directory, IP address All Assigned, TCP port 80, and no SSL certificate. In the log file I get an error: "Unable to connect to remote server. HTTP status code 500. An attempt was made to access a socket in a way forbidden by its access permissions." Does anyone have any idea why this is happening? I've searched the net but haven't been able to find a solution.

    Read the article

  • Development process for an embedded project with significant hardware changes

    - by pierr
    I have a good idea of the Agile development process, but it seems it does not fit well with an embedded project involving significant hardware changes. I will describe below what we are currently doing (in an ad-hoc way, with no defined process yet). The changes are divided into three categories, and a different process is used for each of them:

    Complete hardware change (example: use a different video codec IP)
    a) Study the new IP
    b) RTL/FPGA simulation
    c) Implement the legacy interface; go to b)
    d) Wait until hardware (tape-out) is ready
    e) Test on the real hardware

    Hardware improvement (example: enhance the image display quality by improving the underlying algorithm)
    a) RTL/FPGA simulation
    b) Wait until hardware is ready and test on the hardware

    Minor change (example: only change hardware register mapping)
    a) Wait until hardware is ready and test on the hardware

    The worry is that we don't seem to have much control over, or confidence in, software maturity across the hardware changes, as the bring-up schedule is always very tight and the customer desires a seamless transition when updating to a new version of hardware. How did you manage this kind of hardware change? Did you solve it with a Hardware Abstraction Layer (HAL)? Did you have automated tests for the HAL layer? How did you test when the hardware platform was not even ready? Do you have well-documented processes for this kind of change?

    Read the article

  • "Admin module" taking over [Yii framework]

    - by Flavius
    Hi, I have an "admin" module and I want it to serve "dynamic controllers", i.e. to provide default behaviour for controllers which don't really exist ("virtual controllers"). I've invented a lightweight messaging mechanism for loose communication between modules. I'd like to use it such that when e.g. ?r=admin/users/index is requested, it will call the "virtual controller" "UserController" of AdminModule, which would, by default, use this messaging mechanism to notify the real module "UsersModule" that it can answer the request.

    I thought about simulating this behaviour in AdminModule::init(), but at that point I have no way of deciding whether the action can be processed by a real controller or not, or at least I don't know how to do it. This is because of the way Yii works, bottom-up: the controller is the one that renders the view AND the application layout (or the module's, if it exists), for example. I don't think the module even has a say in whether it handles a given controller+action or not.

    To recap, I'm looking for a kind of CWebModule::missingController($controllerId, $actionId), just like CController::missingAction($actionId), or a workaround to simulate that. That would possibly be in CWebModule::init(), or somewhere where I can find out whether the controller actually exists, in which case it's its job to handle the $actionID and $controllerID, and whether the module $controllerID exists (I didn't type it wrong: in r=admin/users/index, "users" is the actual module, as specified in the application's config).

    Read the article

  • Fluent NHibernate: Enum in composite key gets mapped to int when I need string

    - by Quintin Par
    By default, the behaviour of FNH is to map enums to their string values in the db. But when an enum is part of a composite key, the property gets mapped as an int. E.g., in this case:

        public class Address : Entity
        {
            public Address() { }

            public virtual AddressType Type { get; set; }
            public virtual User User { get; set; }
        }

    where AddressType is:

        public enum AddressType
        {
            PRESENT,
            COMPANY,
            PERMANENT
        }

    The FNH mapping is:

        mapping.CompositeId().KeyReference(x => x.User, "user_id").KeyProperty(x => x.Type);

    The schema creation for this mapping results in:

        create table address (
            Type INTEGER not null,
            user_id VARCHAR(25) not null,
            ...

    and the hbm:

        <composite-id mapped="true" unsaved-value="undefined">
          <key-property name="Type" type="Company.Core.AddressType, Company.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
            <column name="Type" />
          </key-property>
          <key-many-to-one name="User" class="Company.Core.CompanyUser, Company.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
            <column name="user_id" />
          </key-many-to-one>
        </composite-id>

    whereas AddressType should have been generated with type="FluentNHibernate.Mapping.GenericEnumMapper`1[[Company.Core.AddressType, ...

    How do I instruct FNH to map it with the default string enum generic mapper?
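    One workaround, sketched here as an option rather than the canonical answer (it sidesteps the question of whether your FNH version allows a custom type on a composite-id key property): expose a string property backed by the enum and use that in the composite id, so the key column is naturally a VARCHAR. The TypeKey name is illustrative:

        public class Address : Entity
        {
            public virtual AddressType Type { get; set; }
            public virtual User User { get; set; }

            // String shadow of the enum, used only for the composite key
            // so the column is mapped as VARCHAR rather than INTEGER.
            public virtual string TypeKey
            {
                get { return Type.ToString(); }
                set { Type = (AddressType)Enum.Parse(typeof(AddressType), value); }
            }
        }

        // mapping:
        // mapping.CompositeId()
        //        .KeyReference(x => x.User, "user_id")
        //        .KeyProperty(x => x.TypeKey, "Type");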

    Read the article

  • IIS 7 can't connect to SQLServer 2008

    - by Nicolas Cadilhac
    Sorry if this is the most-seen question on the web, but this is my turn. I am trying to publish my ASP.NET MVC app on IIS 7 under MS SQL Server 2008. I am on a Windows Server 2008 virtual machine. I get the following classic error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

    Under SQL Server, "Allow remote connections" is checked. My connection string is:

        Data Source=.\MSSQLSERVER;Initial Catalog=mydbname;User Id=sa;Password=mypassword

    I also tried with no username/password and "Integrated Security=true". There is only one instance of SQL Server installed. I tried to access my web page locally and remotely. There is no active firewall on the virtual machine. Thanks for your help.
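    One detail worth checking first (it matches the "error 26 - Error Locating Server/Instance Specified" symptom, though it isn't the only possible cause): MSSQLSERVER is the service name of the default instance, not a named instance, so ".\MSSQLSERVER" is not a valid Data Source. The default instance is addressed as just "." or "(local)":

        using System.Data.SqlClient;

        // "." targets the default instance; ".\INSTANCENAME" is only for
        // named instances, and "MSSQLSERVER" cannot be used as one.
        var cs = "Data Source=.;Initial Catalog=mydbname;User Id=sa;Password=mypassword";
        using (var cn = new SqlConnection(cs))
        {
            cn.Open(); // throws error 26 again if the server still can't be located
        }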

    Read the article

  • handling refrence to pointers/double pointers using SWIG [C++ to Java]

    - by Siddu
    My code has an interface like:

        class IExample {
            ~IExample();
            // pure virtual methods ...
        };

    a class inheriting the interface:

        class CExample : public IExample {
        protected:
            CExample();
            // implementation of pure virtual methods ...
        };

    and a global function to create an object of this class:

        createExample(IExample*& obj) { obj = new CExample(); }

    Now, I am trying to get a Java API wrapper using SWIG. The SWIG-generated interface has a constructor like:

        IExample(long cPtr, boolean cMemoryOwn)

    and the global function becomes:

        createExample(IExample obj)

    The problem is that when I do:

        IExample exObject = new IExample(LogFileLibraryJNI.new_plong(), true /* or false */);
        createExample(exObject);

    the createExample(...) API at the C++ layer successfully gets called; however, when the call returns to the Java layer, the cPtr (long) variable does not get updated. Ideally, this variable should contain the address of the CExample object. I read in the documentation that typemaps can be used to handle output parameters and pointer references as well; however, I am not able to figure out a suitable way to use typemaps to resolve this problem, or any other workaround. Please suggest if I am doing something wrong, or how to use a typemap in such a situation.

    Read the article

  • error C2504: 'BASECLASS' : base class undefined

    - by numerical25
    I checked out a post similar to this, but the linkage was different and the issue was never resolved. The problem with mine is that for some reason the compiler expects there to be a definition for the base class, but the base class is just an interface. Below is the error in its entirety:

        c:\users\numerical25\desktop\intro todirectx\godfiles\gxrendermanager\gxrendermanager\gxrendermanager\gxdx.h(2) : error C2504: 'GXRenderer' : base class undefined

    Below is the code that shows how the headers link with one another.

    GXRenderManager.h:

        #ifndef GXRM
        #define GXRM

        #include <windows.h>
        #include "GXRenderer.h"
        #include "GXDX.h"
        #include "GXGL.h"

        enum GXDEVICE {
            DIRECTX,
            OPENGL
        };

        class GXRenderManager {
        public:
            static int Ignite(GXDEVICE);
        private:
            static GXRenderer *renderDevice;
        };

        #endif

    At the top of GXRenderManager.h there are the GXRenderer, windows, GXDX, and GXGL headers. I am assuming that by including them all in this document, they all link to one another as if they were all in the same document. Correct me if I am wrong, because that's how I view headers. Moving on...

    GXRenderer.h:

        class GXRenderer {
        public:
            virtual void Render() = 0;
            virtual void StartUp() = 0;
        };

    GXGL.h:

        class GXGL : public GXRenderer {
        public:
            void Render();
            void StartUp();
        };

    GXDX.h:

        class GXDX : public GXRenderer {
        public:
            void Render();
            void StartUp();
        };

    GXGL.cpp and GXDX.cpp respectively:

        #include "GXGL.h"

        void GXGL::Render() { }
        void GXGL::StartUp() { }

        // ...next document

        #include "GXDX.h"

        void GXDX::Render() { }
        void GXDX::StartUp() { }

    Not sure what's going on. I think it's how I am linking the documents, but I am not sure.

    Read the article

  • Where exactly should i add crossdomain.xml file?

    - by SpikETidE
    Hi everyone... I am trying to create an Internet radio. I use Icecast2 for streaming, the edcast plugin with Winamp to send the music to Icecast, and the XSPF web music player (http://musicplayer.sourceforge.net/) to connect the user to the Icecast server and play the music. The setup works great, and I can broadcast and receive on the local network I use to test the radio, using XAMPP.

    Now Icecast broadcasts online from a Windows server with the IP address, say, xx.xx.xxx.xxx. The webpage in which the Flash player is embedded is uploaded to www.xyz.com/images/radio. This domain has the same IP address that Icecast runs from. Now, when I run the webpage to connect to the radio with the Flash player, I get the following error in Firebug: "xx.xx.xxx.xxx:8000/crossdomain.xml 404 NOT FOUND". But I have created a crossdomain.xml file in the root of the xx.xx.xxx.xxx server, and it still doesn't recognize the file. Can anyone tell me where exactly I should create the file for my setup? Thanks a lot in advance.
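    A hedged pointer on placement: the Firebug error shows Flash requesting the policy file from port 8000, i.e. from Icecast itself, so the file has to be served by Icecast rather than by the website's document root. The usual approach is to drop it into the directory Icecast serves static files from (configured as <webroot> in icecast.xml; verify the path on your install). For reference, a minimal permissive policy file looks like this (tighten the domain for production):

        <?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM
          "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
          <!-- allow any domain while testing; restrict this later -->
          <allow-access-from domain="*" />
        </cross-domain-policy>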

    Read the article

  • Need help using the DefaultModelBinder for a nested model.

    - by Will
    There are a few related questions, but I can't find an answer that works. Assuming I have the following models:

        public class EditorViewModel
        {
            public Account Account { get; set; }
            public string SomeSimpleStuff { get; set; }
        }

        public class Account
        {
            public string AccountName { get; set; }
            public int MorePrimitivesFollow { get; set; }
        }

    and a view that extends ViewPage<EditorViewModel> which does the following:

        <%= Html.TextBoxFor(model => model.Account.AccountName) %>
        <%= Html.ValidationMessageFor(model => model.Account.AccountName) %>
        <%= Html.TextBoxFor(model => model.SomeSimpleStuff) %>
        <%= Html.ValidationMessageFor(model => model.SomeSimpleStuff) %>

    and my controller looks like:

        [HttpPost]
        public virtual ActionResult Edit(EditorViewModel account) { /* ... */ }

    How can I get the DefaultModelBinder to properly bind my EditorViewModel? Without doing anything special, I get an empty instance of my EditorViewModel with everything null or default. The closest I've come is by calling UpdateModel manually:

        [HttpPost]
        public virtual ActionResult Edit(EditorViewModel account)
        {
            account.Account = new Account();
            UpdateModel(account.Account, "Account");
            // this kills me:
            UpdateModel(account);

    This successfully updates my Account property model, but when I call UpdateModel on account (to get the rest of the public properties of my EditorViewModel), I get the completely unhelpful "The model of type ... could not be updated." There is no inner exception, so I can't figure out what's going wrong. What should I do with this?
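    One likely culprit, offered as a hypothesis rather than a certainty: the action parameter is named "account", and the DefaultModelBinder uses the parameter name as a binding prefix. Because the posted key "Account.AccountName" starts with that prefix (matching is case-insensitive), the binder strips it and then looks for an AccountName property directly on EditorViewModel, finds none, and leaves everything empty. Renaming the parameter so that no form key matches it makes the binder fall back to the empty prefix and bind the nested properties normally:

        [HttpPost]
        public virtual ActionResult Edit(EditorViewModel viewModel) // not "account"
        {
            // "Account.AccountName" now binds to viewModel.Account.AccountName
            // and "SomeSimpleStuff" to viewModel.SomeSimpleStuff.
            if (!ModelState.IsValid)
                return View(viewModel);

            return RedirectToAction("Index");
        }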

    Read the article
