Python/Django response time is long but I don't know what's wrong

My Django application uses django-rest-framework with MySQL. I tested the application and almost every endpoint has a long response time, but I don't know what the problem is.

This is the profile of one of the functions with the longest response time:

   180232 function calls (171585 primitive calls) in 1.110 seconds

   Ordered by: internal time
   List reduced from 757 to 151 due to restriction <0.2>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      105    0.597    0.006    0.597    0.006 /Users/jyj/.pyenv/versions/logispot_app_env/lib/python3.6/site-packages/MySQLdb/connections.py:268(query)
        2    0.154    0.077    0.174    0.087 /Users/jyj/.pyenv/versions/logispot_app_env/lib/python3.6/site-packages/MySQLdb/connections.py:81(__init__)
        4    0.020    0.005    0.020    0.005 /Users/jyj/.pyenv/versions/logispot_app_env/lib/python3.6/site-packages/MySQLdb/connections.py:254(autocommit)
8800/3582    0.010    0.000    0.828    0.000 {built-in method builtins.getattr}
    20156    0.010    0.000    0.022    0.000 {built-in method builtins.isinstance}
  200/100    0.009    0.000    0.886    0.009 /Users/jyj/.pyenv/versions/logispot_app_env/lib/python3.6/site-packages/rest_framework/serializers.py:479(to_representation)
        2    0.009    0.005    0.009    0.005 {function Connection.set_character_set at 0x109b506a8}
     6920    0.009    0.000    0.009    0.000 {built-in method builtins.hasattr}
                                      ....

This endpoint returns the first page of a list (total count 1000, page size 100), and each record joins only one other table. The query took a long time, so I replaced the Django ORM with a raw query, but the time was the same. (Maybe I wrote the raw query incorrectly.)
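One way to confirm where the time goes is to count the SQL statements a single page actually triggers. A minimal sketch, assuming the import paths below and run in manage.py shell with DEBUG = True (Django only records queries in debug mode):

from django.db import connection, reset_queries

# Import paths are assumptions; adjust them to where these classes really live.
from order.models import CacheOrderList
from order.serializers import CacheOrderListSerializer

reset_queries()
page = CacheOrderList.objects.all()[:100]               # one page, as in the list view
data = CacheOrderListSerializer(page, many=True).data   # force serialization
print(len(connection.queries))                          # ~101 here would point to an N+1 pattern

If the count is close to the page size rather than a handful, the time is going into per-row queries rather than into one slow query.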

Even the auth check has a long response time:

        2199 function calls (2133 primitive calls) in 0.195 seconds

   Ordered by: internal time
   List reduced from 419 to 84 due to restriction <0.2>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        2    0.153    0.076    0.169    0.084 /Users/jyj/.pyenv/versions/logispot_app_env/lib/python3.6/site-packages/MySQLdb/connections.py:81(__init__)
        4    0.016    0.004    0.016    0.004 /Users/jyj/.pyenv/versions/logispot_app_env/lib/python3.6/site-packages/MySQLdb/connections.py:254(autocommit)
        3    0.014    0.005    0.014    0.005 /Users/jyj/.pyenv/versions/logispot_app_env/lib/python3.6/site-packages/MySQLdb/connections.py:268(query)
        2    0.008    0.004    0.008    0.004 {function Connection.set_character_set at 0x109b506a8}

I think it should take less than 60 ms. (Maybe my expectation is wrong.)

Are Django's queries just slow, or is something wrong with my application? I don't know what the problem is.
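One cost that stands out in both profiles is MySQLdb's Connection.__init__ at roughly 150 ms per request, i.e. a new database connection is opened every time. A hedged settings.py sketch (the connection values are placeholders) that reuses connections via Django's CONN_MAX_AGE setting:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',        # placeholder
        'USER': 'myuser',      # placeholder
        'PASSWORD': '...',     # placeholder
        'HOST': '127.0.0.1',
        'PORT': '3306',
        'CONN_MAX_AGE': 60,    # keep connections open for up to 60 seconds
    }
}

With persistent connections, most of the 0.169 s the auth-check profile spends in Connection.__init__ should disappear.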

EDIT: view code

class OrderListCreationAPI(generics.ListCreateAPIView):
    permission_classes = (
        permissions.IsAuthenticatedOrReadOnly,
        IsAdminOrClient,
    )
    pagination_class = StandardListPagination

    def get_queryset(self):
        if self.request.method == 'GET':
            queryset = CacheOrderList.objects.all()
            return queryset
        else:
            return Order.objects.all()

    def get_serializer_class(self):
        if self.request.method == 'GET':
            return CacheOrderListSerializer
        else:
            return OrderSerializer

Serializer code

class CacheOrderListSerializer(serializers.ModelSerializer):
    base = CacheBaseSerializer(read_only=True)

    class Meta:
        model = CacheOrderList
        fields = '__all__'

Model code

class CacheBase(models.Model):
    created_time = models.DateTimeField(auto_now_add=True)
    order = models.OneToOneField('order.Order', on_delete=models.CASCADE, related_name='cache', primary_key=True)
    driver_user = models.ForeignKey('member.DriverUser', on_delete=models.SET_NULL, null=True)
    client_name = models.CharField(max_length=20, null=True)
    load_address = models.CharField(max_length=45, null=True)
    load_company = models.CharField(max_length=20, null=True)
    load_date = models.DateField(null=True)
    load_time = models.CharField(max_length=30)
    unload_address = models.CharField(max_length=45, null=True)
    unload_company = models.CharField(max_length=20, null=True)
    unload_date = models.DateField(null=True)
    unload_time = models.CharField(max_length=30)
    stop_count = models.IntegerField(default=0)
    is_round = models.BooleanField(default=False)
    is_mix = models.BooleanField(default=False)
    car_ton = models.CharField(max_length=15, null=True)
    weight = models.FloatField(null=True)
    payment_method = models.BooleanField(default=s.ORDER_PAYMENT_METHOD_ADVANCE)
    contract_fee = models.IntegerField(null=True)
    driver_fee = models.IntegerField(null=True)
    order_fee = models.IntegerField(null=True)
    is_deleted = models.BooleanField(default=False)


class CacheOrderList(models.Model):
    base = models.OneToOneField(CacheBase, on_delete=models.CASCADE, related_name='order_list')
    order_status = models.IntegerField(null=True)
    order_created_time = models.DateTimeField(null=True)
    car_type = models.CharField(max_length=10, null=True)
    asignee = models.CharField(max_length=20, null=True)
    objects = CacheManager()

    class Meta:
        ordering = ('-order_created_time',)
        db_table = 'CacheOrderList'

EDIT2: In the list endpoint, the application has already fetched the 100 items, but afterwards it queries them again one by one instead of reusing the items it already has, so each request costs roughly 100 × the single-query time.

This is probably because the serializer does not use the records already fetched by pagination, so it ends up as 2 pagination queries + 100 serializer queries + the rest.

I don't know why this happens.
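One way to see why is to log the SQL Django actually runs. A minimal settings.py sketch (requires DEBUG = True, since the django.db.backends logger only emits statements in debug mode); it makes the ~100 repeated per-row SELECTs issued by the serializer visible in the console:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        # Django's SQL logger; prints each executed statement when DEBUG = True
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}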

Use select_related for the base field on the CacheOrderList queryset. This prepares a queryset cache containing the foreign-key relations you pass to the method, so the database is essentially not hit again and again.

Example

def get_queryset(self):
    if self.request.method == 'GET':
        # prepare related models cache using `select_related`
        queryset = CacheOrderList.objects.all().select_related('base')
        return queryset
    else:
        return Order.objects.all()
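If CacheBaseSerializer (not shown in the question) also nests the driver_user relation, the same fix extends through the double-underscore path; this is only an assumption about that serializer:

# Only needed if CacheBaseSerializer also serializes base.driver_user
queryset = CacheOrderList.objects.select_related('base', 'base__driver_user')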