Search Results

Search found 2859 results on 115 pages for 'enum flags'.

Page 68/115

  • Typechecking macro arguments in C

    - by Rocketmagnet
    Hi all, is it possible to typecheck arguments to a #define macro? For example:

        typedef enum { REG16_A, REG16_B, REG16_C } REG16;

        #define read_16(reg16) read_register_16u(reg16); \
            assert(typeof(reg16) == typeof(REG16));

    The above code doesn't seem to work. What am I doing wrong? BTW, I am using gcc, and I can guarantee that I will always be using gcc in this project. The code does not need to be portable.
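
    A minimal sketch of one way to get a real compile-time check under these constraints (assuming C11 for _Static_assert; statement expressions and __builtin_types_compatible_p are GCC extensions, which is fine since portability is ruled out). Note that the check accepts variables declared as REG16 but rejects a bare REG16_A, because enumerator constants have type int in C:

        #include <stdint.h>

        typedef enum { REG16_A, REG16_B, REG16_C } REG16;

        uint16_t read_register_16u(REG16 reg);   /* the question's own function */

        /* Fails the build on a type mismatch instead of firing a runtime assert. */
        #define read_16(reg16) ({ \
            _Static_assert(__builtin_types_compatible_p(__typeof__(reg16), REG16), \
                           "read_16 expects a REG16"); \
            read_register_16u(reg16); \
        })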

    Read the article

  • Why does GC.GetTotalMemory() report huge memory allocations?

    - by Seventh Element
    I have been playing around with GC.GetTotalMemory(). When I create a local variable of type Titles in the example below, the consumed amount of memory increases by 6276 bytes. What's going on here?

        class Program
        {
            enum Titles { Mr, Ms, Mrs, Dr };

            static void Main(string[] args)
            {
                GetTotalMemory();
                Titles t = Titles.Dr;
                GetTotalMemory();
            }

            static void GetTotalMemory()
            {
                long bytes = GC.GetTotalMemory(true);
                Console.WriteLine("{0}", bytes);
            }
        }
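
    The jump is almost certainly one-time runtime overhead (JIT compilation and Console internals) rather than the enum: a local of an enum type is a value type and allocates nothing on the GC heap. A sketch of a fairer measurement that warms the code path up before taking the baseline:

        static void Main(string[] args)
        {
            Console.WriteLine(GC.GetTotalMemory(true));   // warm-up: JIT + Console allocations happen here

            long before = GC.GetTotalMemory(true);
            Titles t = Titles.Dr;                         // stack-only; no heap allocation
            long after = GC.GetTotalMemory(true);

            Console.WriteLine(after - before);            // expect roughly 0
        }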

    Read the article

  • How to force calling of QWidget::paintEvent() when its hovered by other window?

    - by Pie_Jesu
    Hi there. I have encountered a problem: I'm writing a widget which displays the current date's day number. It behaves like a button, but it's not derived from the QPushButton class, just from QWidget. So I reimplemented enterEvent(), leaveEvent(), mousePressEvent(), and mouseReleaseEvent(). I call update() inside these methods and the widget has realistic button behavior (paintEvent() is reimplemented too). But when I change the system date and cover the widget with another window, my widget doesn't call paintEvent() and still displays the old date. Only when I place the mouse over it does the widget repaint its contents. I guess there is an option which repaints the old contents on exposure to avoid unnecessary paint events, but I need to disable it. I tried setting many attributes (the Qt::WidgetAttribute enum), but it doesn't help. Please help me (and sorry for my bad English).
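
    Qt repaints from its backing store when a widget is re-exposed, so the widget never learns that the system date changed; it has to notice the change itself and call update(). A sketch of that idea (not the asker's code; Qt 5 connect syntax, the same approach works in Qt 4 with a slot):

        #include <QWidget>
        #include <QTimer>
        #include <QDate>
        #include <QPainter>

        class DayButton : public QWidget {
            Q_OBJECT
        public:
            explicit DayButton(QWidget *parent = 0)
                : QWidget(parent), m_shown(QDate::currentDate()) {
                QTimer *timer = new QTimer(this);
                connect(timer, &QTimer::timeout, this, [this] {
                    if (m_shown != QDate::currentDate()) {   // date rolled over
                        m_shown = QDate::currentDate();
                        update();                            // schedules a paintEvent()
                    }
                });
                timer->start(1000);                          // poll once per second
            }
        protected:
            void paintEvent(QPaintEvent *) {
                QPainter p(this);
                p.drawText(rect(), Qt::AlignCenter, QString::number(m_shown.day()));
            }
        private:
            QDate m_shown;
        };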

    Read the article

  • Getting changes in one column of an historical table

    - by Javi
    Hello, I have a table which stores historical data. It's mapped to an entity with the following fields (I use JPA with the Hibernate implementation):

        @Entity
        @Table(name="items_historical")
        public class ItemHistory {
            private Integer id;
            private Date date;
            @Enumerated(EnumType.ORDINAL)
            private StatusEnum status;
            @ManyToOne(optional=false)
            private User user;
            @ManyToOne(optional=false)
            private Item item;
        }

        public enum StatusEnum {
            OK, BAD, ... // my statuses
        }

    Every row stores historical data from another table. I need to get the list of changes to the "status" column for a specified item: the status, the date, and the previous status (getting just the status and date at each point where the status changed would also be good). I don't know if this is possible with HQL. Thanks.
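
    HQL has no lag()-style construct, so this is usually done with a native query. A sketch in plain SQL (the item_id column name is assumed from the @ManyToOne mapping): a correlated subquery fetches the status of the immediately preceding row, and the whole thing can be wrapped to keep only rows where the two statuses differ:

        SELECT h.date,
               h.status,
               (SELECT h2.status
                  FROM items_historical h2
                 WHERE h2.item_id = h.item_id
                   AND h2.date < h.date
                 ORDER BY h2.date DESC
                 LIMIT 1) AS previous_status
          FROM items_historical h
         WHERE h.item_id = :itemId
         ORDER BY h.date;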

    Read the article

  • Bitwise setting in C++

    - by Sunil
    enum AccessSource
        {
            AccessSourceNull    = 0x00000001,
            AccessSourceSec     = 0x00000002,
            AccessSourceIpo     = 0x00000004,
            AccessSourceSSA     = 0x00000008,
            AccessSourceUpgrade = 0x00000010,
            AccessSourceDelta   = 0x00000020,
            AccessSourcePhoneM  = 0x00000040,
            AccessSourceSoft    = 0x00000080,
            AccessSourceCR      = 0x00000100,
            AccessSourceA       = 0x00000200,
            AccessSourceE       = 0x00000400,
            AccessSourceAll     = 0xFFFFFFFF
        };

    What is the value of AccessSourceAll? Is it -1, or is it the maximum value? I have a parameter ULONG x whose default value is AccessSourceAll (meaning access to everything). How do I remove the access right AccessSourceE only? How do I add the access right AccessSourceE back again? And if I have a particular value in x, how do I know whether AccessSourceE is set or not?
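
    For reference, the standard bit operations answer all four questions. Since ULONG is unsigned, AccessSourceAll is 0xFFFFFFFF, the maximum 32-bit value; only reading the same bit pattern as a signed 32-bit integer gives -1:

        ULONG x = AccessSourceAll;                  // every bit set: all rights granted

        x &= ~AccessSourceE;                        // remove AccessSourceE only
        x |= AccessSourceE;                         // add AccessSourceE back
        bool hasE = (x & AccessSourceE) != 0;       // test whether AccessSourceE is set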

    Read the article

  • How to return result set based on other rows

    - by understack
    I have 2 tables - packages and items. The items table contains all items belonging to the packages, along with location information, like this:

    Packages table:

        id, name, type (enum{general,special})
        1, name1, general
        2, name2, special

    Items table:

        id, package_id, location
        1, 1, America
        2, 1, Africa
        3, 1, Europe
        4, 2, Europe

    Question: I want to find all 'special' packages belonging to a location, and if no special package is found, the query should return the 'general' packages belonging to the same location. So:

    for 'Europe': package 2 should be returned, since it is a special package (package 1 also belongs to Europe, but is not wanted since it's a general package)

    for 'America': package 1 should be returned, since there are no special packages
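
    One way to express the fallback in a single MySQL query (a sketch): the inner subquery yields 'special' when the location has at least one special package and NULL otherwise, so COALESCE picks the type to filter on:

        SELECT DISTINCT p.id, p.name, p.type
          FROM packages p
          JOIN items i ON i.package_id = p.id
         WHERE i.location = 'Europe'
           AND p.type = COALESCE(
                 (SELECT 'special'
                    FROM packages p2
                    JOIN items i2 ON i2.package_id = p2.id
                   WHERE i2.location = 'Europe'
                     AND p2.type = 'special'
                   LIMIT 1),
                 'general');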

    Read the article

  • Which MySQL Datatype to use for storing boolean values from/to PHP?

    - by Beat
    Since MySQL doesn't seem to have any 'boolean' datatype, which datatype do you 'abuse' for storing true/false information in MySQL, especially in the context of writing to and reading from a PHP script? Over time I have used and seen several approaches: tinyint and varchar fields containing the values 0/1, varchar fields containing the strings '0'/'1' or 'true'/'false', and finally enum fields containing the two options 'true'/'false'. None of the above seems optimal; I tend to prefer the tinyint 0/1 variant, since automatic type conversion in PHP gives me boolean values rather simply. So which datatype do you use? Is there a type designed for boolean values which I have overlooked? Do you see any advantages/disadvantages in using one type or another?
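
    Worth noting: MySQL does accept BOOLEAN in DDL, but only as an alias for TINYINT(1), which is exactly the tinyint 0/1 variant preferred above:

        CREATE TABLE example (
            active BOOLEAN NOT NULL DEFAULT 0    -- stored as TINYINT(1); reads back as 0/1
        );

        -- in PHP, an explicit cast then restores a real boolean:
        --   $active = (bool) $row['active'];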

    Read the article

  • C#: Converting string to namespace

    - by Am
    Hi, I have an interface named IHarvester. There are 3 implementations of that interface, each under its own namespace: Google, Yahoo, Bing. A HarvesterManager uses the given harvester; it knows the interface and all 3 implementations. I want some way of letting the class user say which harvester it wants to use, and to select that implementation in code without a switch-case. Can reflection save my day? Here are the code bits:

        // repeat for each harvester
        namespace Harvester.Google
        {
            public abstract class Fetcher : BaseHarvester, IInfoHarvester {...}
        }

        public enum HarvestingSource
        {
            Google,
            Yahoo,
            Bing,
        }

        class HarvesterManager
        {
            public HarvestingSource PreferedSource { get; set; }

            public HarvestSomthing()
            {
                switch (PreferedSource)
                .... // awful...
            }
        }

    Thanks.
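
    Reflection can do it, but a dictionary of factories removes the switch with less machinery and fails fast if an enum member has no registered implementation. A sketch (the ConcreteFetcher names are hypothetical, since the question's Fetcher classes are abstract):

        private static readonly Dictionary<HarvestingSource, Func<IInfoHarvester>> factories =
            new Dictionary<HarvestingSource, Func<IInfoHarvester>>
            {
                { HarvestingSource.Google, () => new Harvester.Google.ConcreteFetcher() },
                { HarvestingSource.Yahoo,  () => new Harvester.Yahoo.ConcreteFetcher()  },
                { HarvestingSource.Bing,   () => new Harvester.Bing.ConcreteFetcher()   },
            };

        public void HarvestSomething()
        {
            IInfoHarvester harvester = factories[PreferedSource]();   // no switch-case
            // ... use harvester ...
        }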

    Read the article

  • including a std::map within a struct? Is it ok?

    - by user553514
    class X_class {
        public:
            struct extra {
                int extra1;
                int extra2;
                int extra3;
            };

            enum a { n, m };

            struct x_struct {
                char b;
                char c;
                int d;
                int e;
                std::map<int, extra> myExtraMap;
            };
        };

    In my code I define:

        x_struct myStruct;

    Why do I get compile errors compiling the above class? The error either says: 1) "expected ; before <" on the line where I defined the map (above), or, if I eliminate std::, 2) "error: invalid use of ::" and "error: expected ; before < token".
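
    Assuming the errors are the usual ones, two fixes make this compile (a sketch):

        #include <map>              // declares std::map; without it the
                                    // "expected ; before <" error appears

        X_class::x_struct myStruct; // outside X_class the nested type name
                                    // must be qualified with the class name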

    Read the article

  • Java application return codes

    - by doele
    I have a Java program that processes one file at a time. This Java program is called from a wrapper script which logs the return code from the Java program. There are 2 types of errors: expected errors and unexpected errors. In both cases I just need to log them. My wrapper knows about 3 different states: 0 = OK, 1 = PROCESSING_FAILED, 2 = ERROR. Is this a valid approach? Here is my approach:

        enum ReturnCodes { OK, PROCESSING_FAILED, ERROR };

        public static void main(String[] args) {
            ...
            proc.processMyFile();
            ...
            System.exit(ReturnCodes.OK.ordinal());
        } catch (Throwable t) {
            ...
            System.exit(ReturnCodes.ERROR.ordinal());
        }

        private void processMyFile() {
            try {
                ...
            } catch (ExpectedException e) {
                ...
                System.exit(ReturnCodes.PROCESSING_FAILED.ordinal());
            }
        }
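
    The approach works; the fragile part is ordinal(), whose value silently changes if the enum is ever reordered or extended in the middle. A sketch giving each status an explicit, stable code:

        enum ReturnCode {
            OK(0), PROCESSING_FAILED(1), ERROR(2);

            private final int code;

            ReturnCode(int code) { this.code = code; }

            int code() { return code; }
        }

        // in main: System.exit(ReturnCode.OK.code());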

    Read the article

  • Are booleans as method arguments unacceptable?

    - by koschi
    A colleague of mine states that booleans as method arguments are not acceptable; they should be replaced by enumerations. At first I did not see any benefit, but he gave me an example. What's easier to understand?

        file.writeData( data, true );

    Or:

        enum WriteMode { Append, Overwrite };
        file.writeData( data, Append );

    Now I got it! ;-) This is definitely an example where an enumeration as the second parameter makes the code much more readable. So, what's your opinion on this topic?

    Read the article

  • RESTful enums. string or Id?

    - by GazTheDestroyer
    I have a RESTful service that exposes enums. Should I expose them as localised strings or as plain integers? My leaning is toward integers for easy conversion at the service end, but in that case the client needs to grab a list of localised strings from somewhere in order to know what the enums mean. Am I just creating extra steps for nothing? I can find little information about what is commonly done in RESTful APIs. EDIT: OK, let's say I'm writing a website that stores information about people's pets. I could have an AnimalType enum:

        0 Dog
        1 Cat
        2 Rabbit
        etc.

    When people grab a particular pet resource, say /pets/1, I can either provide a meaningful localised string for the animal type, or just provide the ID and force them to do another lookup via a /pets/types resource. Or should I provide both?
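
    One common pattern (a sketch, not from the question) is to return both the stable numeric ID and a display string in the representation, so clients can switch on the ID without a second round trip:

        GET /pets/1

        {
            "id": 1,
            "name": "Flopsy",
            "animalType": 2,
            "animalTypeName": "Rabbit"
        }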

    Read the article

  • Selecting distinct values from mysql with largest timestamp

    - by user987048
    I am building a mail system. The inbox is only supposed to grab the last message (the one with the highest time value) for each user/sender combination, where user or sender is the user ID. Here is the table structure:

        CREATE TABLE IF NOT EXISTS `mail` (
          `user` int(11) NOT NULL,
          `sender` int(11) NOT NULL,
          `body` text NOT NULL,
          `new` enum('0','1') NOT NULL default '1',
          `time` int(11) NOT NULL,
          KEY `user` (`user`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    So, with a table containing the following data:

        user    sender    new    time
        1       0         0      5
        1       0         0      6
        2       1         0      7
        1       0         1      8
        1       2         0      9
        1       0         1      11
        1       2         1      12

    I want to select the following, WHERE USER OR SENDER = X (in this case, 1):

        user    sender    new    time
        2       1         0      7
        1       2         0      9
        1       0         1      11

    How would I go about doing something like this?
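
    The sample output above looks inconsistent (it keeps two rows for the 1/2 pair and drops the row with time 12), so this sketch follows the prose instead: the latest row per conversation partner of user 1, using the usual groupwise-maximum pattern:

        SELECT m.*
          FROM mail m
          JOIN (SELECT IF(user = 1, sender, user) AS partner,
                       MAX(time) AS maxtime
                  FROM mail
                 WHERE user = 1 OR sender = 1
                 GROUP BY partner) last
            ON IF(m.user = 1, m.sender, m.user) = last.partner
           AND m.time = last.maxtime
         WHERE m.user = 1 OR m.sender = 1;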

    Read the article

  • C++ file including C header file

    - by fdeslaur
    I need to include a C header file in my C++ project, but g++ throws "not declared in this scope" errors. I read that I need to use the extern "C" keyword to fix it, but it didn't seem to work for me. Here is a dummy example triggering this error. main.cpp:

        #include <iostream>

        extern "C" {
        #include "includedFile.h"
        }

        int main() {
            int a = 2;
            int b = 1212;
            std::cout << "Hello World!\n";
            return 0;
        }

    includedFile.h:

        #include <stdint.h>

        enum TypeOfEnum {
            ONE, TWO, THREE,
            FOUR = INT32_MAX,
        };

    The error thrown is:

        $> g++ main.cpp
        In file included from main.cpp:4:0:
        includedFile.h:7:9: error: 'INT32_MAX' was not declared in this scope
             FOUR = INT32_MAX,

    I saw on this post that I may need #define __STDC_LIMIT_MACROS, without any success. Any help is welcome!
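
    #define __STDC_LIMIT_MACROS is the right idea, but it must be seen before <stdint.h> is included for the first time, and <iostream> can pull <stdint.h> in transitively. A sketch of main.cpp with the define first (compiling with -D__STDC_LIMIT_MACROS achieves the same without touching the source; newer compilers in C++11 mode define the limit macros unconditionally, which is why the problem eventually disappears):

        // main.cpp
        #define __STDC_LIMIT_MACROS   // must precede *any* include of <stdint.h>
        #include <iostream>

        extern "C" {
        #include "includedFile.h"
        }

        int main() {
            std::cout << "Hello World!\n";
            return 0;
        }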

    Read the article

  • How do I mount a "DiskSecure Multiboot" partition?

    - by ????
    For a hard drive that has 4 or 5 partitions, I was able to mount one of them using an Ubuntu LiveCD:

        sudo mount /dev/sda1 /mnt

    but is there a way to mount the other partitions? (sudo fdisk -l only shows /dev/sda.) (GParted's screenshot is not preserved.) Right now, the fdisk info is as follows:

        ubuntu@ubuntu:~$ sudo fdisk -l /dev/sda

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x1aca8ea5

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1       284993226   350602558    32804666+   7  HPFS/NTFS/exFAT

    and then:

        ubuntu@ubuntu:/mnt$ sudo fdisk -l /dev/sda1

        Disk /dev/sda1: 33.6 GB, 33591978496 bytes
        255 heads, 63 sectors/track, 4083 cylinders, total 65609333 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x2052474d

        This doesn't look like a partition table
        Probably you selected the wrong device.

              Device Boot      Start          End      Blocks   Id  System
        /dev/sda1p1   ?      6579571   1924427647   958924038+  70  DiskSecure Multi-Boot
        /dev/sda1p2   ?   1953251627   3771827541   909287957+  43  Unknown
        /dev/sda1p3   ?    225735265    225735274           5   72  Unknown
        /dev/sda1p4      2642411520   2642463409       25945    0  Empty

        Partition table entries are not in disk order

    Per @lgarzo's request, the parted info is:

        ubuntu@ubuntu:/mnt$ sudo parted /dev/sda print
        Model: ATA ST3320820AS (scsi)
        Disk /dev/sda: 320GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start  End    Size    Type     File system  Flags
         1      146GB  180GB  33.6GB  primary  ntfs         boot

    The command sudo mount /dev/sda1p2 /mnt won't work.
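
    A diagnostic sketch, not a verified fix: if the remaining partitions were hidden by the boot-time security tool, a partition-table scan can sometimes recover them (testdisk is in the Ubuntu repositories):

        sudo apt-get install testdisk
        sudo testdisk /dev/sda      # Analyse -> Quick Search to look for lost partitions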

    Read the article

  • Access violation in DirectX OMSetRenderTargets

    - by IDWMaster
    I receive the following error (Unhandled exception at 0x527DAE81 (d3d11_1sdklayers.dll) in Lesson2.Triangles.exe: 0xC0000005: Access violation reading location 0x00000000) when running the Triangle sample application for DirectX 11 in D3D_FEATURE_LEVEL_9_1. This error occurs at the OMSetRenderTargets function, as shown below, and does not happen if I remove that function from the program (but then, the screen is blue, and does not render the triangle) //// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF //// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO //// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A //// PARTICULAR PURPOSE. //// //// Copyright (c) Microsoft Corporation. All rights reserved #include #include #include "DirectXSample.h" #include "BasicMath.h" #include "BasicReaderWriter.h" using namespace Microsoft::WRL; using namespace Windows::UI::Core; using namespace Windows::Foundation; using namespace Windows::ApplicationModel::Core; using namespace Windows::ApplicationModel::Infrastructure; // This class defines the application as a whole. ref class Direct3DTutorialViewProvider : public IViewProvider { private: CoreWindow^ m_window; ComPtr m_swapChain; ComPtr m_d3dDevice; ComPtr m_d3dDeviceContext; ComPtr m_renderTargetView; public: // This method is called on application launch. void Initialize( _In_ CoreWindow^ window, _In_ CoreApplicationView^ applicationView ) { m_window = window; } // This method is called after Initialize. void Load(_In_ Platform::String^ entryPoint) { } // This method is called after Load. void Run() { // First, create the Direct3D device. // This flag is required in order to enable compatibility with Direct2D. UINT creationFlags = D3D11_CREATE_DEVICE_BGRA_SUPPORT; #if defined(_DEBUG) // If the project is in a debug build, enable debugging via SDK Layers with this flag. creationFlags |= D3D11_CREATE_DEVICE_DEBUG; #endif // This array defines the ordering of feature levels that D3D should attempt to create. D3D_FEATURE_LEVEL featureLevels[] = { D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3, D3D_FEATURE_LEVEL_9_1 }; ComPtr d3dDevice; ComPtr d3dDeviceContext; DX::ThrowIfFailed( D3D11CreateDevice( nullptr, // specify nullptr to use the default adapter D3D_DRIVER_TYPE_HARDWARE, nullptr, // leave as nullptr if hardware is used creationFlags, // optionally set debug and Direct2D compatibility flags featureLevels, ARRAYSIZE(featureLevels), D3D11_SDK_VERSION, // always set this to D3D11_SDK_VERSION &d3dDevice, nullptr, &d3dDeviceContext ) ); // Retrieve the Direct3D 11.1 interfaces. DX::ThrowIfFailed( d3dDevice.As(&m_d3dDevice) ); DX::ThrowIfFailed( d3dDeviceContext.As(&m_d3dDeviceContext) ); // After the D3D device is created, create additional application resources. CreateWindowSizeDependentResources(); // Create a Basic Reader-Writer class to load data from disk. This class is examined // in the Resource Loading sample. BasicReaderWriter^ reader = ref new BasicReaderWriter(); // Load the raw vertex shader bytecode from disk and create a vertex shader with it. auto vertexShaderBytecode = reader-ReadData("SimpleVertexShader.cso"); ComPtr vertexShader; DX::ThrowIfFailed( m_d3dDevice-CreateVertexShader( vertexShaderBytecode-Data, vertexShaderBytecode-Length, nullptr, &vertexShader ) ); // Create an input layout that matches the layout defined in the vertex shader code. 
// For this lesson, this is simply a float2 vector defining the vertex position. const D3D11_INPUT_ELEMENT_DESC basicVertexLayoutDesc[] = { { "POSITION", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 }, }; ComPtr inputLayout; DX::ThrowIfFailed( m_d3dDevice-CreateInputLayout( basicVertexLayoutDesc, ARRAYSIZE(basicVertexLayoutDesc), vertexShaderBytecode-Data, vertexShaderBytecode-Length, &inputLayout ) ); // Load the raw pixel shader bytecode from disk and create a pixel shader with it. auto pixelShaderBytecode = reader-ReadData("SimplePixelShader.cso"); ComPtr pixelShader; DX::ThrowIfFailed( m_d3dDevice-CreatePixelShader( pixelShaderBytecode-Data, pixelShaderBytecode-Length, nullptr, &pixelShader ) ); // Create vertex and index buffers that define a simple triangle. float3 triangleVertices[] = { float3(-0.5f, -0.5f,13.5f), float3( 0.0f, 0.5f,0), float3( 0.5f, -0.5f,0), }; D3D11_BUFFER_DESC vertexBufferDesc = {0}; vertexBufferDesc.ByteWidth = sizeof(float3) * ARRAYSIZE(triangleVertices); vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT; vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexBufferDesc.CPUAccessFlags = 0; vertexBufferDesc.MiscFlags = 0; vertexBufferDesc.StructureByteStride = 0; D3D11_SUBRESOURCE_DATA vertexBufferData; vertexBufferData.pSysMem = triangleVertices; vertexBufferData.SysMemPitch = 0; vertexBufferData.SysMemSlicePitch = 0; ComPtr vertexBuffer; DX::ThrowIfFailed( m_d3dDevice-CreateBuffer( &vertexBufferDesc, &vertexBufferData, &vertexBuffer ) ); // Once all D3D resources are created, configure the application window. // Allow the application to respond when the window size changes. m_window-SizeChanged += ref new TypedEventHandler( this, &Direct3DTutorialViewProvider::OnWindowSizeChanged ); // Specify the cursor type as the standard arrow cursor. m_window-PointerCursor = ref new CoreCursor(CoreCursorType::Arrow, 0); // Activate the application window, making it visible and enabling it to receive events. m_window-Activate(); // Enter the render loop. Note that tailored applications should never exit. while (true) { // Process events incoming to the window. m_window-Dispatcher-ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent); // Specify the render target we created as the output target. ID3D11RenderTargetView* targets[1] = {m_renderTargetView.Get()}; m_d3dDeviceContext-OMSetRenderTargets( 1, targets, NULL // use no depth stencil ); // Clear the render target to a solid color. const float clearColor[4] = { 0.071f, 0.04f, 0.561f, 1.0f }; //Code fails here m_d3dDeviceContext-ClearRenderTargetView( m_renderTargetView.Get(), clearColor ); m_d3dDeviceContext-IASetInputLayout(inputLayout.Get()); // Set the vertex and index buffers, and specify the way they define geometry. UINT stride = sizeof(float3); UINT offset = 0; m_d3dDeviceContext-IASetVertexBuffers( 0, 1, vertexBuffer.GetAddressOf(), &stride, &offset ); m_d3dDeviceContext-IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST); // Set the vertex and pixel shader stage state. m_d3dDeviceContext-VSSetShader( vertexShader.Get(), nullptr, 0 ); m_d3dDeviceContext-PSSetShader( pixelShader.Get(), nullptr, 0 ); // Draw the cube. m_d3dDeviceContext-Draw(3,0); // Present the rendered image to the window. Because the maximum frame latency is set to 1, // the render loop will generally be throttled to the screen refresh rate, typically around // 60Hz, by sleeping the application on Present until the screen is refreshed. 
DX::ThrowIfFailed( m_swapChain-Present(1, 0) ); } } // This method is called before the application exits. void Uninitialize() { } private: // This method is called whenever the application window size changes. void OnWindowSizeChanged( _In_ CoreWindow^ sender, _In_ WindowSizeChangedEventArgs^ args ) { m_renderTargetView = nullptr; CreateWindowSizeDependentResources(); } // This method creates all application resources that depend on // the application window size. It is called at app initialization, // and whenever the application window size changes. void CreateWindowSizeDependentResources() { if (m_swapChain != nullptr) { // If the swap chain already exists, resize it. DX::ThrowIfFailed( m_swapChain-ResizeBuffers( 2, 0, 0, DXGI_FORMAT_R8G8B8A8_UNORM, 0 ) ); } else { // If the swap chain does not exist, create it. DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {0}; swapChainDesc.Stereo = false; swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; swapChainDesc.Scaling = DXGI_SCALING_NONE; swapChainDesc.Flags = 0; // Use automatic sizing. swapChainDesc.Width = 0; swapChainDesc.Height = 0; // This is the most common swap chain format. swapChainDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // Don't use multi-sampling. swapChainDesc.SampleDesc.Count = 1; swapChainDesc.SampleDesc.Quality = 0; // Use two buffers to enable flip effect. swapChainDesc.BufferCount = 2; // We recommend using this swap effect for all applications. swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL; // Once the swap chain description is configured, it must be // created on the same adapter as the existing D3D Device. // First, retrieve the underlying DXGI Device from the D3D Device. ComPtr dxgiDevice; DX::ThrowIfFailed( m_d3dDevice.As(&dxgiDevice) ); // Ensure that DXGI does not queue more than one frame at a time. This both reduces // latency and ensures that the application will only render after each VSync, minimizing // power consumption. DX::ThrowIfFailed( dxgiDevice-SetMaximumFrameLatency(1) ); // Next, get the parent factory from the DXGI Device. ComPtr dxgiAdapter; DX::ThrowIfFailed( dxgiDevice-GetAdapter(&dxgiAdapter) ); ComPtr dxgiFactory; DX::ThrowIfFailed( dxgiAdapter-GetParent( __uuidof(IDXGIFactory2), &dxgiFactory ) ); // Finally, create the swap chain. DX::ThrowIfFailed( dxgiFactory-CreateSwapChainForImmersiveWindow( m_d3dDevice.Get(), DX::GetIUnknown(m_window), &swapChainDesc, nullptr, // allow on all displays &m_swapChain ) ); } // Once the swap chain is created, create a render target view. This will // allow Direct3D to render graphics to the window. ComPtr backBuffer; DX::ThrowIfFailed( m_swapChain-GetBuffer( 0, __uuidof(ID3D11Texture2D), &backBuffer ) ); DX::ThrowIfFailed( m_d3dDevice-CreateRenderTargetView( backBuffer.Get(), nullptr, &m_renderTargetView ) ); // After the render target view is created, specify that the viewport, // which describes what portion of the window to draw to, should cover // the entire window. D3D11_TEXTURE2D_DESC backBufferDesc = {0}; backBuffer-GetDesc(&backBufferDesc); D3D11_VIEWPORT viewport; viewport.TopLeftX = 0.0f; viewport.TopLeftY = 0.0f; viewport.Width = static_cast(backBufferDesc.Width); viewport.Height = static_cast(backBufferDesc.Height); viewport.MinDepth = D3D11_MIN_DEPTH; viewport.MaxDepth = D3D11_MAX_DEPTH; m_d3dDeviceContext-RSSetViewports(1, &viewport); } }; // This class defines how to create the custom View Provider defined above. 
ref class Direct3DTutorialViewProviderFactory : IViewProviderFactory { public: IViewProvider^ CreateViewProvider() { return ref new Direct3DTutorialViewProvider(); } }; [Platform::MTAThread] int main(array^) { auto viewProviderFactory = ref new Direct3DTutorialViewProviderFactory(); Windows::ApplicationModel::Core::CoreApplication::Run(viewProviderFactory); return 0; }

    Read the article

  • Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E

    - by Sebastian Bugiu
    I had Ubuntu 11.10, and in the last few weeks I experienced an obscure problem: after the computer had been running for a few days, I could no longer connect to google.com or anything related to Google. All sites worked with all browsers (Firefox, Chrome, Opera) except Google; it remained in the connecting phase for a few minutes and either timed out or finally connected with this huge delay. Even on other sites, such as this one, anything Google-related (AdSense, gstatic, whatever with a g in it) took a long time to load, stuck on "connecting to gstatic.com". Anything Google-related took minutes to work, but everything else worked instantly! Rebooting helped, and another machine (with Windows on it) worked fine, so it's not network related. But after a few days it started again... So I upgraded to Precise Pangolin, hoping this behavior would go away. It didn't! After a few days I get the same behavior as in 11.10. What am I supposed to do? Reboot every other day? I didn't have this problem with either 10.10 or 11.04. I found the Realtek RTL8168/8111E issue with the r8169 driver, but this is not exactly the same card, so trying r8168 probably won't help.

        Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02)
            Subsystem: Toshiba America Info Systems Device ff1c
            Flags: bus master, fast devsel, latency 0, IRQ 44
            I/O ports at 4000 [size=256]
            Memory at d0010000 (64-bit, prefetchable) [size=4K]
            Memory at d0000000 (64-bit, prefetchable) [size=64K]
            Capabilities: [40] Power Management version 7
            Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [70] Express Endpoint, MSI 01
            Capabilities: [ac] MSI-X: Enable- Count=2 Masked-
            Capabilities: [cc] Vital Product Data
            Capabilities: [100] Advanced Error Reporting
            Capabilities: [140] Virtual Channel
            Capabilities: [160] Device Serial Number 09-00-00-00-ff-ff-00-00
            Kernel driver in use: r8169
            Kernel modules: r8169
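
    A hedged guess worth testing: Google was one of the few sites answering over IPv6 at the time, so a broken IPv6 route produces exactly this "only Google hangs" symptom. Temporarily disabling IPv6 confirms or rules that out:

        # takes effect immediately; reverts on reboot
        sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
        sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1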

    Read the article

  • Davicom Semiconductor, Inc. 21x4x DEC-Tulip not detected by Wireshark but IP operational

    - by deepsix86
    Recently flipped to Ubuntu 11.10 on a Dell 4300 (Intel). Getting an IP address and no issues (ping/surf), but Wireshark is unable to detect the eth0 interface. I see references in forums to blacklisting tulip, but it looks like I am running dmfe. Not sure if the blacklist is required or where to go from here. Maybe a driver update? I got a little lost looking in that area. Some h/w details below (IP/MAC/HOSTNAME removed).

        Linux xxxxxx 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 17:34:21 UTC 2012 i686 i686 i386 GNU/Linux

    network-admin (HOSTS TAB) does not list eth0, only loopback and a bunch of IPv6 interfaces.

    ifconfig:

        eth0  Link encap:Ethernet  HWaddr xxxxxxxx
              inet addr:192.168.x.xx  Bcast:192.168.2.255  Mask:255.255.255.0
              inet6 addr: xxxxxxxxxxx 64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:36662 errors:0 dropped:1 overruns:0 frame:0
              TX packets:24975 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:42115779 (42.1 MB)  TX bytes:3056435 (3.0 MB)
              Interrupt:18 Base address:0xe800

    lspci:

        02:09.0 Ethernet controller: Davicom Semiconductor, Inc. 21x4x DEC-Tulip compatible 10/100 Ethernet (rev 31)
            Subsystem: Device 4554:434e
            Flags: bus master, medium devsel, latency 64, IRQ 18
            I/O ports at e800 [size=256]
            Memory at fe1ffc00 (32-bit, non-prefetchable) [size=256]
            Expansion ROM at fe200000 [disabled] [size=256K]
            Capabilities: [50] Power Management version 2
            Kernel driver in use: dmfe
            Kernel modules: dmfe

    hwinfo --netcard:

        20: PCI 209.0: 0200 Ethernet controller
          [Created at pci.318]
          Unique ID: rBUF.0NgK5ZS9c0D
          Parent ID: 6NW+.siohrLUzzI4
          SysFS ID: /devices/pci0000:00/0000:00:1e.0/0000:02:09.0
          SysFS BusID: 0000:02:09.0
          Hardware Class: network
          Model: "Davicom 21x4x DEC-Tulip compatible 10/100 Ethernet"
          Vendor: pci 0x1282 "Davicom Semiconductor, Inc."
          Device: pci 0x9102 "21x4x DEC-Tulip compatible 10/100 Ethernet"
          SubVendor: pci 0x4554
          SubDevice: pci 0x434e
          Revision: 0x31
          Driver: "dmfe"
          Driver Modules: "dmfe"
          Device File: eth0
          I/O Ports: 0xe800-0xe8ff (rw)
          Memory Range: 0xfe1ffc00-0xfe1ffcff (rw,non-prefetchable)
          Memory Range: 0xfe200000-0xfe23ffff (ro,non-prefetchable,disabled)
          IRQ: 18 (61379 events)
          HW Address: 00:08:a1:01:35:70
          Link detected: yes
          Module Alias: "pci:v00001282d00009102sv00004554sd0000434Ebc02sc00i00"
          Driver Info #0:
            Driver Status: dmfe is active
            Driver Activation Cmd: "modprobe dmfe"
          Config Status: cfg=new, avail=yes, need=no, active=unknown
          Attached to: #11 (PCI bridge)
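
    Before touching the driver, it is worth checking capture privileges: on Ubuntu, Wireshark listing no interfaces while the network otherwise works is usually a permissions problem, because dumpcap refuses to capture for unprivileged users. A sketch of the standard fix:

        sudo dpkg-reconfigure wireshark-common   # answer "Yes" to allow non-root capture
        sudo usermod -aG wireshark $USER         # then log out and back in
        # quick one-off test instead: sudo wireshark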

    Read the article

  • Set umask, set permissions, and set ACL, but SAMBA isn't using those?

    - by Kris Anderson
    I'm running Ubuntu Server 12.04. I have a folder called Music, and I want the default folder permissions to be 775 and the default file permissions 664. I set the permissions on the Music folder to 775 and configured ACL defaults to match:

        # file: Music
        # owner: kris
        # group: kris
        # flags: ss-
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:group::rwx
        default:other::r-x

    I also changed the default umask for my user account, kris, to 002 in .profile. Shouldn't any new file/folder now use those permissions when written to the Samba share? ACLs should work with Samba, from what I can gather. Currently, if I write to that folder from my Mac, folders get 755 and files 644. Another app on my Mac, GoodSync, which syncs a local directory to a network Samba share, produces even worse permissions: files are written as 700 by that program. So it looks like Samba is letting the host/program determine the folder/file permissions. What changes do I need to make to force the permissions I want, regardless of what the host tries to write on the server?
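
    ACL defaults and the login umask only govern processes that create files directly; smbd applies its own mode mapping from smb.conf and, by default, honours the mode bits the client sends. A sketch of per-share settings that force the desired modes regardless of the client (these are real smb.conf parameters; adjust the path):

        [Music]
            path = /path/to/Music
            create mask = 0664
            force create mode = 0664
            directory mask = 0775
            force directory mode = 0775

    Then restart Samba (sudo service smbd restart) for the change to take effect.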

    Read the article

  • Why can I not map a dynamic texture in D3D?

    - by sebf
    I am trying to map a Texture2D resource in DirectX 11 via SharpDX. The resource is declared as a ShaderResource with Dynamic usage and the 'Write' CPU access flag. My call, however, fails with a generic exception from SharpDX:

        _Parent.Context.MapSubresource(
            _Resource,
            0,
            SharpDX.Direct3D11.MapMode.Write,
            SharpDX.Direct3D11.MapFlags.None,
            out stream);

    I see from this question that mapping is supported. The MSDN docs and this other question hint that instead of using Context.MapSubresource() I should be using Texture2D.Map(); however, the Direct3D 11 Texture2D class does not define Map() (though the D3D 10 equivalent does). If I call the above with MapMode.WriteDiscard, the call succeeds, but then the previous content of the texture is lost, which is no good when I only want to update a section of it. Has the Map() method been removed in Direct3D 11, or am I looking in the wrong place? Is the MapSubresource() method unsuitable, or am I using it wrong? EDIT: I declared my resource as Dynamic with CPU write flags, not Default as I originally wrote; sorry for the fairly huge 'typo' that changes the entire question!
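
    This matches documented D3D11 behaviour: Dynamic-usage resources can only be mapped with WriteDiscard (WriteNoOverwrite applies to vertex/index buffers), so Map cannot partially update a dynamic texture. The usual route for partial updates is a Default-usage texture plus UpdateSubresource with a destination box. A sketch of the underlying C++ call, which SharpDX wraps as DeviceContext.UpdateSubresource overloads:

        // Update only the region [x, x+w) x [y, y+h) of mip level 0.
        D3D11_BOX box = { x, y, 0, x + w, y + h, 1 };   // left, top, front, right, bottom, back
        context->UpdateSubresource(texture, 0, &box, srcPixels, srcRowPitchInBytes, 0);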

    Read the article

  • How to push changes from Test server to Live server?

    - by anonymous
    As a beginner, I finally noticed the issue with making changes directly to the live server I've been working on: now that I have a couple of users on it, I can't keep bringing it down so often. I created an EC2 image of my live server and set up a separate instance, so now I have 2 EC2 instances, Stage and Production. I set up GitHub, push changes to Stage, test my code there, and when it's all done and working, push to the production branch, and everything is good. There is a slight wrinkle here: I name my files config_stage.js and config_production.js, set up .gitignore on each server, and have my code read ENV flags to pick the appropriate config. Is this the correct approach? And my main question is: how do you keep track of non-code changes to the server? For example, I installed HAProxy, Stunnel, Redis, MongoDB, and several other things onto the Stage server for testing, and now that it's all working, how do I deploy them to Production? Right now I'm just keeping track of everything I installed and copying configuration files over, which is very tedious, and I'm afraid I may have missed a step somewhere. Is there a better way to port these changes from my test server to my live server?
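
    The ENV-flag config split is a common approach. For the non-code changes, the usual answer is to make provisioning itself code: keep a script (or a configuration-management tool such as Ansible, Chef, or Puppet) in the repo and run it on both servers, so Stage and Production are built the same way. A minimal sketch, with package names assumed for Ubuntu:

        #!/usr/bin/env bash
        # provision.sh -- versioned with the app; run on Stage first, then Production
        set -euo pipefail

        sudo apt-get update
        sudo apt-get install -y haproxy stunnel4 redis-server mongodb

        # config files live in the repo, so the two servers cannot drift
        sudo cp config/haproxy.cfg /etc/haproxy/haproxy.cfg
        sudo service haproxy restart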

    Read the article

  • Using old code on new version of Visual Studio [migrated]

    - by Tu Tran
    I have a project which was started in the 90s in C/C++. It therefore contains many old coding styles, such as K&R-style function declarations, obsolete functions, and so on. The project works fine in Visual Studio 2008, but now I want to use it in a newer version of Visual Studio (specifically VS 2010), because we have other projects in Visual Studio 2010/2012 and I don't want too many versions of Visual Studio on my machine. When I try to compile the old project, Visual Studio throws too many errors. I could fix all of them, but I am scared to edit the source code, and I want other people to be able to open it in the old version of VS too; the project should remain backwards compatible. My question is how to use the old code in Visual Studio 2010/2012 without changing it. Or, if necessary, how do I fix just a few lines of code while making sure they won't cause errors when someone else opens the code in an older version of VS? Is there a way to tell newer Visual Studio versions to use older compiler flags, or something like that?
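
    For the C++ projects specifically, a newer Visual Studio can keep building with the old compiler via the platform toolset setting: the IDE is new, but the compiler and its flags stay at v9.0, so the code needs no changes (this does require keeping the VS 2008 toolset installed on the build machine, though). A sketch of the .vcxproj fragment:

        <PropertyGroup>
          <!-- v90 = the Visual Studio 2008 toolset; requires VS 2008 installed -->
          <PlatformToolset>v90</PlatformToolset>
        </PropertyGroup>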

    Read the article

  • Access PC Settings Easily from Your Desktop in Windows 8 and 8.1

    - by Akemi Iwaya
    Accessing your system's settings in Windows 8 is not exactly the most straightforward of processes, so if you need to change your settings often, it can be a bit frustrating. With that in mind, the good folks over at 7 Tutorials have created an awesome shortcut that takes all the hassle out of accessing those settings and makes 'tweaking' Windows 8 much easier. After downloading the zip file, extract the exe file, place it in an appropriate folder, and create a shortcut. Once you have the new shortcut set up in the desired location (i.e. on the desktop or pinned to the taskbar), accessing your system's settings has never been easier in Windows 8 and 8.1! Special Note: If you are someone who runs files through VirusTotal before using them, be aware that two listings there (Commtouch and Symantec) will flag the file as malware. We had no problems on our system whatsoever and believe the malware flags to be false positives. Download the Desktop Shortcut to PC Settings, for Windows 8 & 8.1 [7 Tutorials]

    Read the article

  • Broadcom Corporation NetLink BCM57785 Gigabit Ethernet PCIe driver tg3 will not install?

    - by Pete
    aries@aries-laptop:~$ sudo ifconfig eth0 up
        eth0: ERROR while getting interface flags: No such device

        aries@aries-laptop:~$ lspci -nn
        00:00.0 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1705]
        00:01.0 VGA compatible controller [0300]: ATI Technologies Inc Device [1002:9641]
        00:01.1 Audio device [0403]: ATI Technologies Inc Device [1002:1714]
        00:04.0 PCI bridge [0604]: Advanced Micro Devices [AMD] Device [1022:1709]
        00:06.0 PCI bridge [0604]: Advanced Micro Devices [AMD] Device [1022:170b]
        00:11.0 SATA controller [0106]: Advanced Micro Devices [AMD] Device [1022:7800] (rev 40)
        00:12.0 USB Controller [0c03]: Advanced Micro Devices [AMD] Device [1022:7807] (rev 11)
        00:12.2 USB Controller [0c03]: Advanced Micro Devices [AMD] Device [1022:7808] (rev 11)
        00:13.0 USB Controller [0c03]: Advanced Micro Devices [AMD] Device [1022:7807] (rev 11)
        00:13.2 USB Controller [0c03]: Advanced Micro Devices [AMD] Device [1022:7808] (rev 11)
        00:14.0 SMBus [0c05]: Advanced Micro Devices [AMD] Device [1022:780b] (rev 13)
        00:14.2 Audio device [0403]: Advanced Micro Devices [AMD] Device [1022:780d] (rev 01)
        00:14.3 ISA bridge [0601]: Advanced Micro Devices [AMD] Device [1022:780e] (rev 11)
        00:14.4 PCI bridge [0604]: Advanced Micro Devices [AMD] Device [1022:780f] (rev 40)
        00:16.0 USB Controller [0c03]: Advanced Micro Devices [AMD] Device [1022:7807] (rev 11)
        00:16.2 USB Controller [0c03]: Advanced Micro Devices [AMD] Device [1022:7808] (rev 11)
        00:18.0 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1700] (rev 43)
        00:18.1 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1701]
        00:18.2 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1702]
        00:18.3 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1703]
        00:18.4 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1704]
        00:18.5 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1718]
        00:18.6 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1716]
        00:18.7 Host bridge [0600]: Advanced Micro Devices [AMD] Device [1022:1719]
        01:00.0 Ethernet controller [0200]: Broadcom Corporation NetLink BCM57785 Gigabit Ethernet PCIe [14e4:16b5] (rev 10)
        01:00.1 SD Host controller [0805]: Broadcom Corporation Device [14e4:16bc] (rev 10)
        01:00.2 System peripheral [0880]: Broadcom Corporation Device [14e4:16be] (rev 10)
        01:00.3 System peripheral [0880]: Broadcom Corporation Device [14e4:16bf] (rev 10)
        02:00.0 Network controller [0280]: Broadcom Corporation Device [14e4:4358]
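
    A diagnostic sketch before anything else: tg3 ships with the mainline kernel, so "will not install" is usually "did not bind". Checking whether the module is loaded and what the kernel said when probing the NIC narrows it down:

        lsmod | grep tg3                     # is the module loaded at all?
        sudo modprobe tg3                    # try loading it by hand
        dmesg | grep -i -e tg3 -e firmware   # look for probe or firmware errors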

    Read the article

  • Problem running Qreator on Xubuntu-14.04

    - by Seyed Mohammad
    I installed Qreator using apt-get on Xubuntu-14.04:

        $ sudo apt-get install qreator

    But the application fails to start! When I try to run it via Terminal, the following error messages are printed and the program aborts:

        $ qreator
        ** (qreator:3859): WARNING **: Couldn't connect to accessibility bus: Failed to connect to socket /tmp/dbus-Gh2FPHrMr2: Connection refused
        No handlers could be found for logger "qreator_lib"
        Traceback (most recent call last):
          File "/usr/bin/qreator", line 47, in <module>
            qreator.main()
          File "/usr/lib/python2.7/dist-packages/qreator/__init__.py", line 63, in main
            window = QreatorWindow.QreatorWindow()
          File "/usr/lib/python2.7/dist-packages/qreator_lib/Window.py", line 48, in __new__
            new_object.finish_initializing(builder)
          File "/usr/lib/python2.7/dist-packages/qreator/QreatorWindow.py", line 79, in finish_initializing
            self.init_qr_types()
          File "/usr/lib/python2.7/dist-packages/qreator/QreatorWindow.py", line 135, in init_qr_types
            self.qr_types = [d(self.update_qr_code) for d in QRCodeType.dataformats]
          File "/usr/lib/python2.7/dist-packages/qreator/qrcodes/QRCodeType.py", line 71, in __init__
            self.create_widget()  # pylint: disable=E1101
          File "/usr/lib/python2.7/dist-packages/qreator/qrcodes/QRCodeLocation.py", line 29, in create_widget
            self.widget = QRCodeLocationGtk(self.qr_code_update_func)
          File "/usr/lib/python2.7/dist-packages/qreator/qrcodes/QRCodeLocationGtk.py", line 49, in __init__
            latitude, longitude = get_current_location()
          File "/usr/lib/python2.7/dist-packages/qreator/qrcodes/QRCodeLocationGtk.py", line 109, in get_current_location
            '/org/freedesktop/Geoclue/Providers/Hostip')
          File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 241, in get_object
            follow_name_owner_changes=follow_name_owner_changes)
          File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 248, in __init__
            self._named_service = conn.activate_name_owner(bus_name)
          File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 180, in activate_name_owner
            self.start_service_by_name(bus_name)
          File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 278, in start_service_by_name
            'su', (bus_name, flags)))
          File "/usr/lib/python2.7/dist-packages/dbus/connection.py", line 651, in call_blocking
            message, timeout)
        dbus.exceptions.DBusException: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Geoclue.Providers.Hostip was not provided by any .service files

    How can I fix this?
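
    The traceback ends at a missing D-Bus service, org.freedesktop.Geoclue.Providers.Hostip, so Qreator's location widget dies before the window ever appears. A hedged guess: installing the GeoClue Hostip provider supplies that service (package name assumed from Debian/Ubuntu conventions; verify with apt-cache search geoclue):

        sudo apt-get install geoclue-hostip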

    Read the article
